Exam 3


You have two predictors, NFC and RSPAN. You want to know how much R2 will increase when you add NFC to a model with RSPAN.

R2(NFC, RSPAN) - R2(RSPAN) = incremental R2 for NFC. This isolates NFC's unique contribution from the model with both NFC and RSPAN together.

In a multiple regression, we can make two hypotheses. Briefly describe the two, including the null and alternative

1. Determine the significance of each individual predictor. H0: The regression coefficient in the population is equal to 0 (𝑏*𝑗 = 0) | H1: The regression coefficient in the population is not equal to 0 (𝑏*𝑗 ≠ 0)
2. Determine the significance of the overall model. H0: The model does not explain a significant amount of variance in Y (R2* = 0) | H1: The model explains a significant amount of variance in Y (R2* ≠ 0)

What are the assumptions of ANOVA

1. Independence of observations: can only be accomplished with random assignment
2. Population for each condition is normally distributed: as long as n's are high, ANOVA is robust to violations of this assumption
3. Homogeneity of variance: if the n's for each group are approximately equal, ANOVA is robust to violations of this assumption

What are important notes re: transforming your data?

1. Be transparent: include the details of your transformation steps in your manuscript
2. Make a-priori decisions about your transformation/analysis plan
3. Run analyses both ways, especially with/without outliers
4. Use descriptives of the raw data, but you can use transformed data in your analyses
5. Don't transform if you don't have to (i.e., if none of your assumptions are violated)

What are alternatives to transforming your data?

1. Choose a nonparametric test that doesn't rely on the assumption of normally distributed data
2. Use robust tests (e.g., the F-test for ANOVA is robust to non-normality if sample size is large, and to heterogeneity of variance if n's are equal)
3. Use other robust procedures, such as a trimmed mean, which essentially takes the mean of the distribution of scores after a percentage of scores has been removed from both extremes of the distribution

What are the assumptions of multiple regression?

1. Data points are independent
2. Linear relation between Y and the independent variables
3. Multivariate normality - residuals should be normally distributed
4. Homoscedasticity - variance of the residuals should be equal across all values of the independent variables
5. No multicollinearity

Converting log(odds) results to probability

1. odds = e^log(odds) 2. probability = odds/(odds + 1)
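A minimal R sketch of these two steps (the log(odds) value here is just an illustration):
log_odds <- -1.78            # e.g., a predicted log(odds) from a logistic model
odds <- exp(log_odds)        # step 1: exponentiate to get the odds
prob <- odds / (odds + 1)    # step 2: convert odds to a probability
prob                         # ~0.14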

Correlations among predictors in a multiple regression

1. You may have a significant model with no significant predictors 2. It may create interpretation difficulty (so what if the overall model is significant if we don't know which predictors contribute to it?) 3. The standard errors of the slopes may be inflated, meaning that the slope values aren't stable or trustworthy

Name two kinds of linear transformations

1. Mean centering (simply subtract the mean of the variable from that variable - allows interpretation of the y-intercept at an average value of the predictor) 2. Standardizing

How do you check for multicollinearity?

1. Observe a correlation matrix; no correlation between predictors should be greater than 0.80 2. Tolerance and the variance inflation factor (VIF), which are based on predicting each predictor from all of the other predictors
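A hedged R sketch of both checks (dat, y, and x1-x3 are placeholder names; vif() requires the car package):
round(cor(dat[, c("x1", "x2", "x3")]), 2)   # correlation matrix; flag any r > .80
fit <- lm(y ~ x1 + x2 + x3, data = dat)
car::vif(fit)                               # VIF for each predictor; tolerance = 1/VIF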

How do you create a standardized composite variable?

1. z-score all individual measures (e.g., TR, SR, Ob) 2. Take the average of the z-scored variables 3. Interpret the composite as 0 = average, >0 = above average, and <0 = below average
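A minimal R sketch (dat is a placeholder data frame containing the TR, SR, and Ob measures):
dat$composite <- rowMeans(scale(dat[, c("TR", "SR", "Ob")]))   # z-score each measure, then average
# 0 = average, > 0 = above average, < 0 = below average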

The F-distribution is associated with how many degrees of freedom

2 df. The first df is for the numerator (read across the top of the F-table); the second df is for the denominator (read down the column).

Tolerance

A diagnostic used to identify multicollinearity, quantified as 1/VIF. Values BELOW 0.2 are problematic.

Bonferroni correction

A method for adjusting the per-comparison error rate (so that the familywise error rate doesn't balloon): α' = α/c. Then run pairwise comparisons (t-tests) and use the corrected α rather than 0.05.
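A hedged R sketch (y and g are placeholder names for the outcome and the grouping factor):
alpha_corrected <- .05 / choose(nlevels(g), 2)           # alpha' = alpha / c, where c = number of pairwise comparisons
pairwise.t.test(y, g, p.adjust.method = "bonferroni")    # or let R apply the correction to the p-values directly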

What is a simple way to control the familywise error rate in multiple comparisons

One option, when doing post-hoc tests with ANOVA to determine where the effect(s) lie, is to ascribe some default reduction in the alpha value, like requiring alpha to equal .01 or .001. In other words, you crudely adjust the per-comparison error rate so that it doesn't multiply into a high familywise error rate (α_FW = 1 - (1-α')^c). This may be problematic in that you end up either under- or over-correcting the familywise error rate.

What is an effect size

A standardized unit that represents the magnitude or strength of a given effect (e.g. "practical significance")

What is an omnibus test

A test for an overall effect, but does not provide specific information about which groups evidence this effect

What is an alternative to hold familywise error at a constant rate regardless of how many group differences there are?

Adjust the familywise error rate as you go - as significant differences are found

Newman Keuls Test

Adjusts the familywise error as you find significant differences.
1. Order all means from smallest to largest
2. Find all pairwise differences
3. Find Wr for each comparison based on the DISTANCE of the comparison rather than k
4. Starting with the largest difference, compare the absolute value of the pairwise difference to Wr (r = number of levels between the means)
5. As soon as a level produces no significant differences, STOP
e.g., with G1, G2, G3, G4, G5: The 1st comparison is the difference between the means of G1 and G5, a distance of 5. Compare that difference to the calculated Wr (r = 5, the distance between the levels of the two means you're comparing). If that difference in means is greater than the Wr for that pairwise comparison, move on to the next step. The 2nd comparison examines a distance of 4: G1-G4 is a distance of 4 and G2-G5 is a distance of 4, so pit Wr (calculated with r = 4) against both of these differences.

Interpret this output from Newman Keuls
Group  Subset
1      a
2      b
3      c
4      d

All four group means are significantly different from one another

What does nested mean in nested model comparison?

All terms of the smaller model occur in the larger model - the variables in the smaller model are therefore considered "nested" within the larger model. Being nested is a necessary condition for using most model comparison tests, like likelihood ratio tests.

What is eta squared?

An ES for ANOVA that is the proportion of total variance that can be explained by group. This estimate is biased upward, though. Example interpretation: "71% of the variance in height is explained by group."

What is omega squared?

An ES for ANOVA that is the proportion of total variance that can be explained by group. Unlike eta squared, omega squared is not biased so will likely be smaller than the eta squared ES estimate

A regression model that contains no predictors is also known as

An intercept-only model

What is science

An iterative process with goals to describe and predict phenomena

ANOVA

Analysis of variance. A statistical procedure that uses the F-ratio to test the overall fit of a linear model. In experimental research, this linear model tends to be defined in terms of group means, and the resulting ANOVA is therefore an overall test of whether group means significantly differ. ANOVA is an omnibus test, meaning that results identify whether there is an overall difference between three or more group means, though it will not identify which group's mean is significantly different from the others.

What is a transformation

Anything you do to your data that takes it out of its raw form

Tukey's Honestly Significant Difference

An approach that controls the familywise alpha at 0.05, similar to Fisher's LSD, but it works better when there are more than 3 groups (Tukey's HSD holds α constant regardless of the number of groups or number of group differences).
1. Get a q-value, which is found in the q-table using k (# groups) and dfwithin
2. Find HSD
3. Calculate pairwise mean differences between groups
4. Compare the absolute value of the mean differences to HSD. If the absolute value is greater than the HSD, those two groups are significantly different
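A hedged R sketch of the same procedure (y, group, and dat are placeholder names):
fit <- aov(y ~ group, data = dat)
TukeyHSD(fit, conf.level = 0.95)   # all pairwise differences with the familywise alpha held at .05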

What is an alternative to logit transforming?

The arcsine transformation is less extreme, meaning that it doesn't stretch the extreme values out as much.

A priori comparisons

A type of comparison used in ANOVA that is decided upon before the study begins; because only a limited set of comparisons is made, the error rate is not as high as the full familywise error rate. The idea is that the researcher specifies the specific comparisons a priori.

What creates untrustworthy beta coefficients in multiple regression?

As collinearity increases, the SEs of the b coefficients also increase. A large SE for a b coefficient means that the bs are more variable across samples and thus less likely to represent the population. If the bs are variable, that also means the prediction equations will be unstable across samples.

What is the underlying idea of post-hoc tests for ANOVA?

Attempt to control the type I error rate - each method is essentially doing this in different ways

You're interested in examining whether scores on the BDI and BAI are each predicting a significant portion of the variance in college dropout. Set up an appropriate hypothesis and outline the steps to test your research question.

BDI: H0: The BDI regression coefficient in the population is equal to 0 (𝑏*𝑗 = 0) | H1: The BDI regression coefficient in the population is not equal to 0 (𝑏*𝑗 ≠ 0). Use a t-test comparing the observed slope (𝑏𝑗) against the population-level slope (𝑏*𝑗). Note that because 𝑏*𝑗 is assumed under the null hypothesis to be equal to 0, it drops out of the t-test equation; the standard error of the slope is the denominator. Complete the t-test, compare against the critical t-value, and determine whether the slope is more extreme than the critical value.
BAI: H0: The BAI regression coefficient in the population is equal to 0 (𝑏*𝑗 = 0) | H1: The BAI regression coefficient in the population is not equal to 0 (𝑏*𝑗 ≠ 0). Complete the same steps outlined for BDI.
Conclusions you can draw if the calculated t is more extreme than the critical t for BDI: Holding BAI constant, BDI significantly predicts college dropout. Conceptualized as a Venn diagram: BDI predicts unique variance in college dropout.

Is the three-predictor model better than just income and women?
[1] education + income : 5.379054
[2] education + income + women : 4.52322
[3] education + women : 2.564849
[4] education : 1.462599
[5] income + women : 2.109043
[6] income : 1.236671
Against denominator: Intercept only

BF = (education + income + women) / (income + women) = 4.52322 / 2.109043 ≈ 2.14, which is only weak evidence that the three-predictor model is better

Does income add to a model with just education?
[1] education + income : 5.379054
[2] education + income + women : 4.52322
[3] education + women : 2.564849
[4] education : 1.462599
[5] income + women : 2.109043
[6] income : 1.236671
Against denominator: Intercept only

BF = (education + income) / (education) = 5.379054 / 1.462599 ≈ 3.68

We decide to run a multiple regression, first standardizing all of our variables to allow interpretation of the output and of which slopes matter more. What is an important consideration to make when standardizing slopes and comparing the importance of predictors?

Based on the output of a standardized OLS multiple regression, we can determine which predictor has the largest slope and therefore matters more. But remember that this is only fine to do when the predictors are uncorrelated. When there is multicollinearity - your predictors are correlated - we know that the estimates are untrustworthy.

What are the values in the intercept-only model showing?

Because the intercept-only model is the null model and contains no predictors, its single estimate (the intercept) is simply the mean of the outcome variable. For example, if you had to guess what the outcome variable would be without knowing any predictors, you'd guess the mean (which is the intercept).

Use BF to determine if there is more support for horsepower or weight in predicting mpg

BF = horsepower / weight = 56963.84 / 45657981 ≈ 0.001, so the data provide far more support for weight than for horsepower

Bonferroni vs. Dunn-Sidak

Bonferroni is much more widely used. As the number of comparisons increases, Dunn-Sidak will have slightly more power than Bonferroni. Both Bonferroni and Dunn-Sidak are criticized for being too conservative, thereby increasing the type II error rate, the probability of not detecting an effect that is there.

What is the difference between simple regression and multiple regression?

Broadly, regression is used when predicting an outcome given either a single predictor (simple regression) or multiple predictors (multiple regression). With simple regression, a model is fitted to the data using the method of least squares, minimizing the squared differences between the line and the actual data points (the SS residual). With multiple regression, the same logical underpinnings apply, though for every extra predictor included in the model a coefficient is added, such that each predictor has its own coefficient (denoted by beta). The outcome variable is then predicted by a combination of all the predictor variables multiplied by their respective coefficients (plus a residual term, or error).

What is important to change in reporting your data after Winsorizing?

Change df to reflect your new n

Imagine you are rolling a die. If the die is fair, 1/6 of your rolls should result in each number face-up. You roll a die 60 times. You want to know if your results are different than you would expect given the null (e.g., chance). What test would you run here and why?

Chi-square goodness of fit, because you have categorical data and you're determining whether the results you got are consistent with the null, i.e., whether they differ from what you would expect by chance.

Describe a few cautions re: data intervention

Data intervention conditionalizes the facts: statements of fact now depend on the data-intervention strategy. You are responsible for ensuring the validity of the facts you enter into the literature. Legitimate procedures are permissible, though they must be fully disclosed in your report.

How can BF be applied to ANOVA?

Determine support for differences (or no differences) between groups H0: all just noise H1: something meaningful about group membership beyond just noise

What is the goal of statistics?

Determine the best model to explain our data, with most test formulas being some version of explained variance / unexplained variance (error)

Multiple logistic regression

Determine the probability of a dichotomous outcome (e.g. depressed, not depressed | survival, death) given multiple predictors

What is the alpha denoted by for the error rate of each individual pairwise comparison?

Error rate per comparison alpha = α'

When df in the numerator is 1

F = t²

When would you use an F-test in multiple regression?

F-test is used to determine whether the overall multiple regression model fits the data relative to the null model (the mean of y)

Let's say you want to determine whether R2 increases when you add RSPAN to a model with NFC and Shape. What would you do?

Find the R2 for the model with all three variables and the R2 for just two of the variables, and then determine the difference. That would leave you with the incremental contribution of the isolated variable

We know that if the predictors correlate with each other, the estimates are untrustworthy. What is a solution here when we want to examine the importance of individual predictors?

Force the regression coefficients to be independent using dominance analysis

Describe the full model, reduced models, and fully reduced model if you have two predictors

Full model: y = b0 + b1x1 + b2x2
Reduced models: y = b0 + b1x1 AND y = b0 + b2x2
Fully reduced model (the null): y = b0
df = N - k - 1, where k = # of predictors in the model
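A hedged R sketch of comparing these nested models (dat, y, x1, and x2 are placeholder names):
full    <- lm(y ~ x1 + x2, data = dat)
reduced <- lm(y ~ x2, data = dat)
null    <- lm(y ~ 1, data = dat)   # fully reduced, intercept-only model
anova(reduced, full)               # does adding x1 improve the model?
anova(null, full)                  # does the full model beat the null?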

You have three measures of self-esteem: teacher-report (TR), self-report (SR), and observation (Ob). You're interested in determining whether self-esteem predicts GPA. How might you go about this question?

GPA ~ self-esteem. You have three measures, which can be made into a standardized composite variable (z-score TR, SR, and Ob and average them). You would then regress GPA on the composite, interpreting the composite self-esteem predictor as 0 = average, >0 = above average, <0 = below average.

Adjusted R2

Gives us some idea of how well our model generalizes and ideally we would like its value to be the same, or very close to, the value of R2

Chi square goodness of fit vs. test of independence

Goodness of fit is a non-parametric test that determines whether there is a "good fit" between observed categorical data and expected frequencies (e.g., I got x% right; is that different from chance?). Test of independence is a non-parametric test that determines whether the distribution of one categorical variable is contingent on the distribution of another categorical variable.

Interpret this output from Newman Keuls
Group  Subset
1      a
2      ab
3      b
4      c

Groups 1, 3, and 4 are significantly different from one another, but group 2 is not significantly different from groups 1 and 3 - only from group 4.

Interpret this output from Newman Keuls
Group  Subset
1      a
2      a
3      b
4      b

Groups 1 and 2 do not have significantly different means from each other Groups 3 and 4 do not have significantly different means from one another Groups 1 and 2 vs. Groups 3 and 4 have different means

ANOVA hypotheses

H0: All groups come from the same population (model without alpha.j: yij = y-hat.. + eij), µ1 = µ2 = ... = µk | H1: The groups do not all come from the same population (model that includes alpha.j and error: yij = y-hat.. + alpha.j + eij)

You are interested in examining predictors of college drop out, and whether your model with BDI, BAI, and GPA explains the variance in drop out. Set up an appropriate hypothesis and outline the steps to test your research question.

H0: The model does not explain a significant amount of variance in college drop out (R2* = 0) | H1: The model explains a significant amount of variance in college drop out (R2* ≠ 0) Use an F-test to compare the explained variance relative to the unexplained variance

You're interested in predicting a car's mpg from features of the vehicle. You find that horsepower significantly predicts mpg, but you want to know if adding weight as another predictor improves the model. What would your steps be in answering this question?

H0: the model with both horsepower and weight does not explain a significant amount of variance in mpg relative to the model with just horsepower, R2 = 0 H1: the model with both horsepower and weight explains a significant amount of variance in mpg relative to the model with just horsepower, R2 ≠0 You're going to use an F-test to examine the reduced model (just horsepower) vs. the full model (horsepower and weight). You would use the F-test and appropriate degrees of freedom.

If you have 10 pairwise comparisons, what would the Bonferroni corrected alpha level be for each pairwise comparison?

If 10 comparisons α' = .05/10 = 0.005

What does multicollinearity do to the size of R in multiple regression?

If a single variable predicts an outcome fairly well (R = .7) and you then add another variable that also predicts variance in the outcome well (R = .8), we know there must be correlation between these predictors, because the total proportion of variance explained cannot exceed 1. Therefore, we know that the unique variance of each predictor will be much smaller in a multiple regression relative to the separate Rs. If the two predictors are uncorrelated, then the variance accounted for by the second predictor is different from that of the other predictor (so when both uncorrelated predictors are entered into the model, R is substantially larger than it would be if the predictors were correlated).

How can we use BF for non-nested model comparison?

If we want to look at more support for one predictor over another

What does ES tell you in an ANOVA

If you have a significant F-test, you still don't know whether the differences have meaningful importance. The ES tells us the magnitude of the effect, e.g., "65% of the variance in height is explained by group membership."

What would the difference in your multiple R2s be if you were to run single regressions on different predictors vs. a multiple regression with all predictors in the model if your predictors had no multicollinearity

If you were to add up each R2 from the single regressions, that value would equal the multiple R2 value from your multiple regression. This is because the multiple regression gives the unique variance of each predictor variable; if there is no correlation between predictors, the multiple regression results (showing the unique variance of each predictor in multiple R2) will equal the added-up variances from the single regressions: R2a + R2b = multiple R2

When would you choose to do mean centering?

If you wish to make an interpretation of the y-intercept: once centered, a predictor value of 0 corresponds to the mean of that predictor, so the intercept tells you the expected outcome at the average of the predictor(s). You do this by subtracting the mean of the variable from every data point.

Why would you choose to do non-linear transformations?

If your data violate assumptions: non-normal data, lack of homogeneity of variance. If your data has range restriction: proportion data. The goal with non-linear transformations is to make the data more normal.

What is the point of standardizing beta coefficients in a multiple regression?

In standardizing, the slope represents: With a 1-standard deviation increase in X you would expect on average a b standard deviation increase in y When the data are standardized the intercept will become 0

Variance inflation factor (VIF)

Indicates whether a predictor has a strong linear relationship with the other predictors (i.e., multicollinearity). Values substantially over 1 are potentially problematic; 10 is definitely bad.

Interpret the following results: Titanic Survival Data: (Survival ~ Sex + Age + Fare)
Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept)  0.934841   0.239101   3.910 9.24e-05 ***
Sex(male)   -2.347599   0.189956 -12.359  < 2e-16 ***
Age         -0.010570   0.006498  -1.627    0.104
Fare         0.012773   0.002696   4.738 2.16e-06 ***
Predicted Log(odds) = 0.93 - 2.34(male) - 0.01(Age) + 0.01(Fare)

Intercept - When all predictors are held at 0, we expect that, on average, the log(odds) of survival is 0.93. Sex - Males have 2.35 lower log(odds) of survival (male = 1), holding all other predictors constant. Age - For a 1-unit increase in age, there will be, on average, a 0.01 decrease in the log(odds) of survival, holding all other predictors constant (note this predictor is not significant, p = .104).

Scheffe Test

It is the most conservative of the mean comparison tests. 1. Calculate Scheffe's value 2. Calculate all pairwise mean differences 3. If the absolute value of the mean difference for a pair of groups is greater than the Scheffe's value, those two groups are significantly different

Why isn't ANOVA called analysis of means?

It may seem odd that the technique is called "Analysis of Variance" rather than "Analysis of Means." As you will see, the name is appropriate because inferences about means are made by analyzing variance

In dominance analysis step 2, the columns you fill in related to K are

K is the number of predictors, moving from 0-->1, 1-->2, and then 2-->3 for each column predictor. With each addition of predictors to the model, you're saying: for rspan, when I go from 1 predictor (rspan) to 2 predictors (the average of the 2-predictor models, nfc + rspan and shape + rspan), this is how much R2 improves relative to the 1-predictor model.

You use the alpha prime of the error rate per comparison in determining what the familywise error rate will be. Why?

Knowing the familywise error rate implied by the alpha prime (0.05, usually) of the pairwise comparisons helps you determine/quantify how much your probability of a type 1 error will jump if you increase the number of comparisons.

What is the difference between linear and nonlinear transformation

A linear transformation simply shifts the data but does not change its internal structure; the relative relation between data points is unchanged. A nonlinear transformation actually changes the data (e.g., changes the shape of the distribution).

You use a go-no-go task to determine the percent of time a participant doesn't go. You're interested in doing analyses on this data, but recognize a few potential problems. What are they, and how might you get around them?

Logit transform this data. It is proportion data, meaning that it is bounded between 0 and 1. Because it is bounded, we may encounter floor/ceiling effects, wherein it's actually harder to get extreme values. The logit transform represents this by keeping the middle of the scale (roughly .30-.70) relatively compressed while stretching out the small differences at the extremes: if there are floor/ceiling effects, you stretch those values out because you assume there are meaningful differences within those small extreme differences. If the task is really easy, for example, we would expect a lot of people to get almost perfect scores, but that is partly attributable to ceiling effects; stretching those scores out lets you detect differences among them.
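A minimal R sketch (p is a placeholder vector of proportions strictly between 0 and 1):
p_logit <- log(p / (1 - p))   # logit transform; stretches values near 0 and 1 relative to the middle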

We conduct BF and find that the model with weight and horsepower is better than one with just horsepower. Now we want to determine whether the car having an automatic transmission (am) improves the model any further.

Model (wt + hp + am)/Model(wt + hp) = 312380065/788547604 = 0.396 Inconclusive whether am improves the model

Using BF, how would you determine whether adding weight to hp improves the model?

Model (wt + hp) / Model (hp) = 788547604 / 56963.84 ≈ 13,843 Lots of evidence in favor of the more complex model

What is the idea behind model comparison in multiple regression?

Model comparison in multiple regression determines whether the model with all predictors (the more complex model) is a better fit to the data/explains more variance in the outcome relative to the fully reduced model (the null, with just the mean of y)

How does multicollinearity affect our ability to interpret each predictor?

Multicollinearity makes it difficult to assess the individual importance of a predictor.

What is the difference between multiple R and R2 in multiple regression?

Multiple R is the correlation between the observed values of Y and the values of Y predicted by the multiple regression. Large values of multiple R = large correlation between the predicted and observed values of the outcome. R2 is the percent of variance in the outcome explained by the regression model

What will non-linear transformations do to your data?

Non-linear transformations will transform the basic structure of the data

Point biserial correlation

One dichotomous (discrete) variable and one continuous variable (e.g., the relation between pregnancy status and age)

Interpret the following: outcome is homeruns
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.007238   0.014540  -0.498   0.6212
Doubles      0.881894   0.348409   2.531   0.0151 *
Triples     -0.801010   0.520890  -1.538   0.1314
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.0211 on 43 degrees of freedom
Multiple R-squared: 0.1317, Adjusted R-squared: 0.09134
F-statistic: 3.262 on 2 and 43 DF, p-value: 0.04798

Overall, the model is statistically significant, F(2,43) = 3.262, p = 0.04798. Intercept - When a player bats 0 doubles and 0 triples, we can expect that, on average, homeruns will be -0.007238. Doubles - For a 1-unit increase in doubles, homeruns will increase by 0.88, holding all other predictors constant. Triples - For a 1-unit increase in triples, homeruns will decrease by 0.80, holding all other predictors constant (note this slope is not significant, p = .1314).

How might you determine the probability of finding at least one significant result by chance?

P(at least one significant result) = 1 − P(no significant results). For example, with 20 comparisons at α = 0.05: 1 − (1 − 0.05)^20 ≈ 0.64, a 64% chance!
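The same arithmetic as a one-line R check:
1 - (1 - 0.05)^20   # ~0.64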

Post-hoc comparisons

Post hoc tests are designed for situations in which the researcher has already obtained a significant omnibus F-test with a factor that consists of three or more means and additional exploration of the differences among means is needed to provide specific information on which means are significantly different from each other Decided upon after looking at group means and observing a significant effect

Predicted Log(odds) = 0.93 - 2.34 (male) - 0.01 (Age) + 0.01(Fare) What is the probability of survival for a male, age 45, who paid 8 euros?

Predicted Log(odds) = 0.93 - 2.34(1) - 0.01(45) + 0.01(8) = -1.78. odds = e^log(odds) ≈ 0.17. probability = odds/(odds + 1) ≈ 14%
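A hedged R check of the same calculation:
log_odds <- 0.93 - 2.34*1 - 0.01*45 + 0.01*8   # male, age 45, fare of 8
odds <- exp(log_odds)                          # ~0.17
odds / (odds + 1)                              # ~0.14; plogis(log_odds) gives the same result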

What are the pros and cons of standardized composite variables?

Pros: a single predictor that is easy to interpret; more representative than any single measure. Cons: measurement error is compounded; better methods for combining measures exist.

Since the results from the ANOVA only tell us whether an effect exists, not where the effect is found (between which groups), what is one way to determine where the effect lies that is REALLY bad?

Run t-test for all pairwise comparisons. The problem with doing this though is that it massively increases type I error rate, α = 0.05 per test

Describe multiple regression in a Venn diagram, including Sum of Squares

SStotal is the entire circle for Y | the difference between the observed values and the mean value of the outcome variable. SSregression is the portion of Y (the outcome) overlapping with the predictors, X1 and X2 | the difference between the values of Y predicted by the model and the mean value. SSresidual is the portion of Y that does not overlap with X1 or X2 | the difference between the values of Y predicted by the model and the observed values.

Fisher's Least Significant Difference

Should only be used with a significant ANOVA and with 3 groups; it doesn't properly control alpha at 0.05 with more than 3 groups. Essentially, LSD represents the minimum distance two means could be from one another and still be significantly different. 1. Use the formula to calculate LSD 2. Calculate the mean difference for each of the pairwise comparisons (e.g., group 1 & group 2, group 1 & group 3, group 2 & group 3) 3. Determine if the absolute value of each pairwise mean difference is greater than the LSD, in which case those two groups are significantly different

Do people rate others differently on an attractiveness scale after they get to know them? What test?

Sign test H0: there is no systematic change in attractiveness ratings from time 1 to time 2 H1: there is a systematic change in attractiveness ratings from time 1 to time 2

All pairwise comparisons

Similar in function to post-hoc comparisons, wherein we discern where the effect lies between group means. With post-hoc comparisons we decide on the analyses after we see the data, whereas with all pairwise comparisons you can decide a priori or post-hoc.

F-ratio/F-statistic

Similar to the t-statistic in that it compares the amount of systematic variance to the amount of unsystematic variance. The F-ratio only tells us that the experimental manipulation has had some effect; it doesn't tell us specifically what the effect was or which group(s) to attribute mean differences to.

F ratio in regression

Simply a measure of the amount that the model has improved the prediction of the outcome compared to the level of inaccuracy of the model (residual error). If the model is good, we then expect improvement in unexplained variance when adding the predictors as opposed to simply using the intercept-only model

Why is it that if n's are equal in ANOVA, the test is relatively robust to the homogeneity of variance assumption?

Since n in the formula is the average n per group, if n is equal for all groups there is no problem and each group's variance is weighted the same. But if n's are not equal, the variance of some groups will be weighted much more than it should be.

Dunn-Sidak Correction

The Dunn-Sidak correction is similar to Bonferroni in that it adjusts the pairwise comparison alpha value, just with a different correction equation: α' = 1 - (1-α)^(1/c)

When is the f-distribution used, generally?

The F-distribution is used most frequently in situations where two variances are being compared Used for Model Comparison with multiple regression as well as in the One-way ANOVA

What does the F-test determine in multiple regression?

The F-test determines the overall significance. It compares a model with no predictors to the model that you specify.

MS between

The average amount of variance explained by the model, quantified by SSb/dfb

ANOVA, broadly, is used to examine

The differences between two or more means (if two means, with 1st df = 1, we can do a t-test)

How does the F ratio operate similarly across regression and ANOVA?

The f-ratio is used to examine overall model fit of our model given observed data. For both a regression and an ANOVA, the f-test helps us quantify the amount of variance explained by our model relative to unexplained variance.

If you compared three groups' means, what would the family be?

The family would contain these three pairwise comparisons G1 v. G2 G1 v. G3 G2 v. G3

Interpret the following: outcome is hours of exercise
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)   -3.29773    0.45787  -7.202 1.21e-12 ***
age            0.43867    0.03779  11.608  < 2e-16 ***
patient group  1.30508    0.21522   6.064 1.92e-09 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 3.207 on 942 degrees of freedom
Multiple R-squared: 0.1599, Adjusted R-squared: 0.1581
F-statistic: 89.67 on 2 and 942 DF, p-value: < 2.2e-16

The model is statistically significant F(2,942)= 89.67, p < 2.2e-16. Intercept- When age is 0 and group is control, we expect that, on average, the hours of exercise per week will be -3.29 holding all other predictors constant Age- For a 1 unit increase in age, exercise will on average increase by .44 holding all other predictors constant Patient group- For those in the patient condition, exercise will be, on average, 1.30508 units higher than exercise in the control group holding all other predictors constant

What influences the number of pairwise comparisons in your family?

The more groups included in your ANOVA, the more pair-wise comparisons in your family

R2 definition

The unique variance in the outcome for which the predictors account

Multiple R2

The variance in y explained by the model (all predictors) as a whole. This is what the F-test determines in the R output, with associated dfs (k, n-k-1, with k=#predictors in the model). The F-test is used to determine overall model significance because F is the ratio of variance explained/variance unexplained

A researcher ran a multiple regression model with 3 predictors and found that the model had a significant multiple R2, but none of the three predictors had significant slopes. Explain how this could occur and explain a Venn diagram showing the relationship between the DV and these 3 predictors

This suggests multicollinearity, such that the three predictor variables are correlated with each other likely at .8 or higher. Multicollinearity increases the standard error, therefore the slope of your estimates will be highly variable between samples. Having variable slope estimates between samples is problematic because this leaves you with a largely uninterpretable model and compromises the "trustworthiness" of your slopes. The Venn diagram would show a significant overlapping variance among your predictor variables.

Winsorizing

Used as an alternative to simply removing your outliers that are causing your data to be skewed. When you winsorize your data, instead of completely removing those points, you replace them with the most extreme, non-outlier value, in your data set. You set a percentage of each tail that will be replaced
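A hedged R sketch of 5%-per-tail winsorizing (x is a placeholder vector; the cutoffs are one common choice):
lo <- quantile(x, .05); hi <- quantile(x, .95)
x_wins <- pmin(pmax(x, lo), hi)   # values beyond the 5th/95th percentiles are replaced with those cutoff values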

In ANOVA, how are pairwise comparisons evaluated?

Using t-tests with a method to control/adjust the alpha rate per comparisons so not to inflate familywise error rate

How does bootstrapping help skirt the use of transformations?

A very neat way to get around the problem of not knowing the shape of the sampling distribution, because bootstrapping takes smaller samples with replacement from the data, treating the sample as a population. By estimating the properties of the sampling distribution from the sample data, bootstrapping gets around the problem of lack of normality. In effect, the sample data are treated like a population from which smaller samples (called bootstraps) are drawn (with replacement). Means are calculated from the new sampling distribution made from your samples, and the SE is found, which allows for CIs and significance estimates.
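A minimal R sketch of a bootstrap of the mean (x is a placeholder vector of sample data):
boot_means <- replicate(10000, mean(sample(x, size = length(x), replace = TRUE)))
sd(boot_means)                        # bootstrap SE of the mean
quantile(boot_means, c(.025, .975))   # percentile 95% CI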

What are pairwise comparisons, broadly?

We are comparing all possible combinations of our groups because we want to examine where an effect lies. The important thing is to use post-hoc tests (e.g. either adjusting the pairwise alpha rate per comparison or set the alpha rate for the whole family - either way with the idea to control the familywise error rate) to correct for all possible comparisons

What is the point of determining the significance of the overall regression model?

We are interested in whether the regression model can generalize to other samples, by using an F-ratio of variance explained (df) / variance unexplained (df)

Why does type 1 error increase when doing multiple comparisons?

We are performing hypothesis tests many times on the same set of data Each hypothesis test has a "built-in" error rate

What is the final step of the dominance analysis, after finding incremental increase in R2 and determining how much R2 changes when we add predictors to the models (0-1, 1-2, 2-3, etc.)

We average all of the columns, determining which average is greatest.

What is the simplest model we can fit to ANOVA, and why?

We can say that we can predict an individual data point given the grand mean, which is the mean of all data collapsed across groups. In other words, group doesn't add anything meaningful in helping us predict the value of an individual data point

How does the CLT apply to transformations?

We know that if we have a high N, our sampling distribution of the mean becomes more and more normal (even when the population itself is not normally distributed). Therefore, if we have a large N, we are less likely to need to transform, because our sampling distribution will be relatively normal.

Why is it important to consider the trade-offs between type I and type II error rates for post-hoc procedures

We know that type I error and statistical power are linked. Therefore, if we have a test that is particularly conservative and thereby controls type I error, it may on the other hand inflate type II error and have less power to detect an effect. If we run post-hoc comparisons on an ANOVA using Bonferroni for example (a conservative test), we run the risk of rejecting differences between means that are, in reality, meaningful.

A researcher now wants to run Bayesian model comparison to determine how much support (if any) there is for adding the clarity & color variables to the model that includes the dimensional features. Run the appropriate model comparison using the provided Bayes Factors.
Dimensional Features BF = 774823
Clarity + Color BF = 434691
Dimensional Features + Clarity + Color BF = 215973700
Against Denominator: Intercept Only

To see whether adding clarity and color improves the dimensional model, we compare the alternative model (all three variables) to the model with just the dimensional features. BF = 215973700 / 774823 ≈ 278.74

Multicollinearity

When a predictor has a strong linear relationship with the other predictors

Sign test

When comparing 2 groups or 2 times points, care only about the direction of change (the sign) not the magnitude (ignore tied scores)

When will the error rate per comparison and the familywise error rate be equal?

When only 1 comparison is made, the error rate per comparison and familywise error rate will be equal, but as the number of comparisons increases they will begin to diverge

When does r equal beta?

When you have only one predictor variable in your model, then beta is equivalent to the correlation coefficient (r) between the predictor and the criterion variable.

You look at your data and see that it is highly skewed, and this is attributable to a few outliers. You don't want to totally remove the outliers from your analyses altogether. What can you do?

Winsorizing is a type of non-linear transformation used when data are highly skewed due to outliers. When you winsorize your data, instead of completely removing those points, you replace them with the most extreme, non-outlier value, in your data set. You determine a percentage of each tail in your data that you will replace.

You are interested in looking at the number of times a child displays a delinquent behavior, counting the frequency. You look at the data and see that, because it's count data, it's positively skewed. What can you do to the data to help normalize it?

With count, discrete data (# of times someone does x), you can square-root transform it. Taking the square root of every individual data point improves skewness and also homogeneity of variance.

What is the idea behind the F-model comparison formula?

You are assessing the change in R2 between the full model (all predictors) and the reduced model, divided by the change in df, relative to the unexplained variance of the full model divided by its df: F = [(R2_full - R2_reduced) / (k_full - k_reduced)] / [(1 - R2_full) / (N - k_full - 1)]. Essentially, you want to know if R2 is increasing "enough" to justify adding another predictor to the model.

What is the general idea underlying linear transformation? What happens to your means, standard deviations, and variances?

You are either adding/subtracting or multiplying/dividing each data point (x) by a constant. Adding/subtracting: the mean is changed, but the standard deviation and variance stay the same (because the distance between data points stays the same). Multiplying/dividing: the mean is multiplied/divided by that same constant, and the standard deviation and variance change, such that the new variance = the constant squared times the old variance.
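A quick R illustration with an arbitrary constant of 3:
x <- c(2, 4, 6, 8)
mean(x + 3); sd(x + 3); var(x + 3)   # mean shifts by 3; sd and var are unchanged
mean(x * 3); sd(x * 3); var(x * 3)   # mean and sd are multiplied by 3; var by 3^2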

Do juries show gender bias? Is the likelihood of giving a guilty verdict at all contingent on the defendant's gender?

You are looking at guilty vs. not-guilty verdicts by gender. Both of these variables are categorical, so non-parametric testing is appropriate. You want to know whether the guilty verdict is contingent upon gender, so a chi-square test of independence is appropriate.

What is an alternative approach to adjusting the alpha level for the pairwise comparison?

You can determine what differences would be needed in order to detect a significant result, holding the familywise error rate at 0.05

You have extremely positively skewed data after collecting reaction time data. You'd like to do NHST on this data, but are concerned about it violating assumptions of normality. What can you do?

You can either bootstrap or, given that the data are continuous and non-discrete (reaction time), log transform the data. By log transforming, we take the natural log (ln) of every single individual data point.
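A minimal R sketch (rt is a placeholder vector of reaction times):
rt_log <- log(rt)   # natural log of every data point
hist(rt_log)        # check whether the positive skew has been reduced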

If the P value for the F-test in a multiple regression is less than .05, what can you conclude?

You can reject the null-hypothesis and conclude that your model provides a better fit than the intercept-only model

What are the steps in nested model comparisons?

You have a full model including b1x1 and b2x2. 1. Full model vs. reduced model with just b2x2 2. Full model vs. reduced model with just b1x1 3. Reduced model (just b1x1) vs. the null 4. Reduced model (just b2x2) vs. the null

You have a dataset, and you'd like to know what your expected outcome would be across all predictors. What would you do?

You would mean center your data. You'd subtract the mean of the variable from the variable itself. This way, you can determine your outcome for an average of the predictor(s)

You're interested in knowing what an average student in both math and reading would be expected to score on end-of-the-year exams. What would you need to do to this data to answer this question?

You're interested in the expected outcome across all predictors (e.g. the "average" student). Therefore, mean centering could be really useful because that allows you to make claims about your intercept given an average predictor.

You have count, discrete data (e.g. data involving frequencies). What is the possible problem in using this data for analyses?

Your count/frequency data is likely to be positively skewed; therefore, you want to consider a non-linear transformation. Square-root transforming will help normalize the count data and also improve homogeneity of variance.

What would the difference in your multiple R2s be if you were to run single regressions on different predictors vs. a multiple regression with all predictors in the model if your predictors had multicollinearity

Your multiple R2 will be smaller than the summed R2s from your single regression output, because the multiple R2 only gives you the unique contribution of each predictor variable to the overall variance in the outcome, whereas the single-regression R2s double-count the shared variance: R2a + R2b > multiple R2

If the P value of the overall F-test is significant, your regression model

Your overall model predicts the response variable better than the mean of the response.

If you run a multiple regression in the bayes factor package, it will provide you

all possible combinations of the predictors, relative to the null BF10 = Alternative/Null
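A hedged sketch using the BayesFactor package (mtcars is just an illustrative data set):
library(BayesFactor)
bfs <- regressionBF(mpg ~ hp + wt, data = mtcars)
bfs               # BF10 for hp, wt, and hp + wt, each against the intercept-only model
bfs[3] / bfs[1]   # dividing two of those BFs compares the corresponding models directly
                  # (check the printed order before picking the indices)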

Alpha prime

alpha rate for a single comparison

What is the standard formula for multiple regression line

y = b0 + b1x1 + b2x2 + ... + bkxk + error
b0 = the intercept, the expected value of y when all predictors are 0
b1 = the slope for x1, the expected change in y for a 1-unit change in x1, holding all other predictors constant
b2 = the slope for x2, the expected change in y for a 1-unit change in x2, holding all other predictors constant
R2 = the variance in Y explained by the whole model with all predictors
k = the number of individual predictors

If you didn't want to do a non-linear transformation on your data, what could you do instead?

bootstrapping, treating your sample like your population and determining your parameters of interest

If the F-value is large enough in ANOVA

claim that the "between" part matters, there are group differences

What is the best model that can be made with these variables?
[1] education + income : 5.379054
[2] education + income + women : 4.52322
[3] education + women : 2.564849
[4] education : 1.462599
[5] income + women : 2.109043
[6] income : 1.236671
Against denominator: Intercept only

education + income

y-hat..

the grand mean, i.e., the mean of the outcome collapsed across all groups

What is the most basic difference between linear transformation and non-linear transformation

A linear transformation only applies transformations that either multiply/divide by a constant or add/subtract a constant. That's it - nothing more. With a non-linear transformation, we're actually changing the basic structure of our data by using log/square root/logit functions.

in ANOVA, what symbol denotes the individual datapoint

i = individual, individual data point

Describe how you would change this equation in a linear transformation with different outcomes in mind. new data point = constant1(old data point) + constant2

If I were to multiply the old x by a value less than 1, that is the same thing as dividing. If I were to add a negative number, that is the same as subtracting. If I don't wish to multiply or divide, I'd make constant1 = 1. If I don't wish to add/subtract, I'd make constant2 = 0.

What does a significant ANOVA mean?

if the differences between your group means (2 or more for an ANOVA) are large enough, the resulting model (which is predicting datapoints given the group means) will be a better fit to the data than the grand mean alone

What would the standard multiple regression output tell us?

it would tell us the overall model significance and the significance of each individual predictor but we would not be able to determine which predictor "matters most" in predicting variance

in ANOVA, what symbol denotes group membership

j = group, j will be 1 to k, where k equals the number of groups

ANOVAs are used to compare

mean differences from k groups, where k = the number of groups (no maximum)

y-hat.j

mean of each individual group

Log transformation corrects for

positive skew, unequal variance

Square root transformation corrects for

positive skew, unequal variance

Describe the basic structure of ANOVA equation

predicting an individual data point using the grand mean + difference for group (alpha.j) + difference for individual (eij) essentially, the ANOVA asks whether you can improve your estimate based on group membership

Error rate per comparison

probability of making a type 1 error on any given comparison

Familywise error rate

probability that a family of conclusions will contain at least one type 1 error

SSbetween ANOVA

sum of squares between describes the variance that our ANOVA model can explain

SSwithin ANOVA

sum of squares within denotes the variance that cannot be explained by our ANOVA model

When would you use a t-test in multiple regression?

t-test is used for the first hypothesis in multiple regression, determining the significance of each slope.

MS within

the average amount of variance explained by extraneous variables (e.g. the unsystematic variance)

in ANOVA, n is

the average sample size (take the average of the sample sizes across groups)

yij

the data point for a specific individual in a specific group

in ANOVA, what is eij

the error, or individual difference: eij = yij - y-hat.. - alpha.j

In dominance analysis step 1, the columns you fill in are

the incremental increases of the column variable when added to the row variable(s)

What is the grand mean in ANOVA

the mean of the outcome variable collapsed across groups, which is the simplest model we can fit to the data in ANOVA.

The dominance scores add up to

the R2 of the full model (the one containing all three predictors) from the R output

in ANOVA, k is

the number of groups

in ANOVA, N is

the number of individual observations

SStotal ANOVA

the total variation within our ANOVA model, how much variance our model needs to explain

Why does standard OLS multiple regression not tell us which predictor is most important?

the variables may be on different scales; a 1-unit change in one predictor may be very different from a 1-unit change in another predictor

What is the difference between alpha.j and yhat.j in ANOVA?

y-hat.j is the mean of each group, whereas alpha.j is the difference from the grand mean for a particular group. e.g., 72 inches = 68 inches (grand mean) + 2 inches (group difference) + 2 inches (individual error), i.e., yij = y-hat.. + alpha.j + eij

y1,4 = y1,2 = y3,1 =

y1,4 = the 1st data point in group 4 y1,2 = the 1st data point in group 2 y3,1 = the 3rd data point in group 1

What are your options when assumptions of normality are not met

You can use nonparametric tests (e.g., Spearman's correlation), bootstrap, or apply a non-linear transformation.

Imagine you do 3 comparisons, what would your familywise error rate be?

α = 1 - (1-α')^c, with c = 3: 1 - (1 - .05)^3 ≈ 0.14

If you have 3 pairwise comparisons, what would the Bonferroni corrected alpha level be for each pairwise comparison?

α' = .05/3 = 0.017

