Methods & Statistics

What should we know about CFA and dimensionality of relationships? (Kline, 2016)

"The specifications of standard CFA models where (1) each observed variable is a simple indicator that depends on a single factor and (2) the errors are independent describe unidimensional measurement. It is multidimensional measurement that is specified in nonstandard CFA models, which have at least one complex indicator that is caused by two or more factors or have at least one error correlation," (p. 195).

What is the core question at the heart of the Benjamini-Hochberg Test? (Howell, 2009)

"What percentage of the significant results ("discoveries") that we have found are false discoveries?"

According to Howell (2009), what are three factors that affect the power of a test?

(1) As a function of Alpha: if we increase Alpha, our cutoff point moves to the left, simultaneously decreasing Beta and increasing power, although with a corresponding rise in Type I error. (2) Widen the difference between the null and your alternative hypothesis (i.e., posit a larger effect), thus increasing power, though there may still be a sizable probability of Type II error. (3) As the variance of the sampling distribution decreases (a function of sample size increasing or variance decreasing), power increases.

How would we statistically determine if a variable is a moderator? (Cohen et al., 2003)

(1) Center the predictors by subtracting from all values of a variable its relevant mean. (2) Perform a hierarchical regression. Step 1: Regress Y on X. Step 2: Add Z. Step 3: Add the X × Z interaction term. If Step 3 is significant, there is a significant main effect and moderation. If not, then look at the first-order effects (Steps 1 & 2). (3) Calculate simple slopes (regression lines for y~x based on two points on Z, usually +/- 1 SD) to evaluate the moderation relationship.
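A minimal sketch of this procedure in Python on simulated data (the variable names x, z, y and the statsmodels dependency are assumptions for illustration, not part of Cohen et al.'s text):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                          # predictor (simulated)
z = rng.normal(size=n)                          # candidate moderator (simulated)
y = 0.5 * x + 0.3 * z + 0.4 * x * z + rng.normal(size=n)

# (1) Center the predictors by subtracting their means
xc, zc = x - x.mean(), z - z.mean()

# (2) Hierarchical regression: the interaction term enters last
step2 = sm.OLS(y, sm.add_constant(np.column_stack([xc, zc]))).fit()
step3 = sm.OLS(y, sm.add_constant(np.column_stack([xc, zc, xc * zc]))).fit()
print(step3.rsquared - step2.rsquared)   # increment in R^2 due to the interaction
print(step3.pvalues[3])                  # test of the interaction coefficient

# (3) Simple slopes of Y on X at Z = +/- 1 SD
b = step3.params
for z0 in (-zc.std(), zc.std()):
    print(f"slope of Y on X at z = {z0:+.2f}: {b[1] + b[3] * z0:.3f}")
```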

What are the assumptions of multiple linear regression? (Cohen et al., 2003)

(1) Correct specification of the form (mathematical shape) of the relationship between IVs and DVs (e.g., linear vs. polynomial). (2) Correct specification of the IVs in the regression model (all variables identified by theory are included, properly measured, and IV/DV forms have been specified). (3) No Measurement Error in the IV (perfect reliability). (4) Homoscedasticity (The variance of the residuals around the regression line is assumed to be constant regardless of the value of X). (5) Independence of Residuals (No relationship among the residuals for any subset of cases in the analysis). (6) Normality of distribution of residuals.

What are the five major subjective decision points one faces when conducting an EFA, and what can be done to reduce the subjectivity? (Fabrigar et al., 1999)

(1) Decide which variables to include in the study, and the size & nature of the sample to be studied (use at least 4 measured variables per expected common factor, perhaps as many as 6 given the usually considerable uncertainty about common factors). (2) Determine if EFA is the most appropriate form of analysis given the goals (go with EFA if the primary goal is to identify latent variables and there is insufficient basis to specify an a priori model, and if PCA isn't the goal either). (3) If appropriate, which specific procedure to fit the data? (Typically go with maximum likelihood, as it allows computation of a wide range of indexes of goodness of fit, plus significance testing of factor loadings and of correlations among factors.) (4) How many factors should be included? (Use multiple methods.) (5) Which factor rotation to use? (Oblique vs. orthogonal.)

What are Fabrigar and Wegener's (2012) recommendations for indicator selection for CFA?

(1) Define the hypothetical constructs of interest. (2) Identify candidate indicators that as a set adequately sample the various domains. Ideally, not all indicators of the same factor will rely on the same measurement method. Note: Minimum number of indicators per factor for CFA models with two or more factors is two. Better practical minimum is 3-5 per anticipated factor.

What are the differences between PCA and FA? (Tabachnick & Fidell, 2014)

(1) The difference involves the contents of the positive diagonal of the correlation matrix. In either, the variance analyzed is the sum of the values in the positive diagonal. In PCA, ones are in the diagonal and there is as much variance to be analyzed as there are observed variables; each variable contributes a unit of variance by contributing a 1 to the positive diagonal. All the variance is distributed to components, including error and unique variance for each observed variable. If all components are retained, PCA duplicates exactly the observed correlation matrix and standard scores. (2) In FA, only the variance that each observed variable shares with other observed variables is available for analysis. Exclusion of error and unique variance from FA is based on the belief that such variance only confuses the picture of underlying processes. Shared variance is estimated by communalities, values between 0 and 1 that are inserted in the positive diagonal. The FA solution concentrates on variables with high communality values. (3) PCA analyzes variance; FA analyzes covariance (communality). (4) The PCA goal is to extract maximum variance from a data set with a few orthogonal components; the FA goal is to reproduce the correlation matrix with a few orthogonal factors. (5) PCA is a unique mathematical solution; FA is often not.

What are the characteristics of a standard CFA model? (Kline, 2016)

(1) Each indicator is continuous with two causes—a single factor that the indicator is supposed to measure and all unique sources of influence represented by the error term. (2) The error terms are independent of each other and of the factors. (3) All associations are linear and the factors covary. Also, because the restricted measurement model is identified through these specifications, there is no rotation phase in CFA.

What are residuals within the context of regression? (Howell, 2009)

They are the errors of prediction (in terms of Y − Ŷ) that one gets when fitting a line of best fit.

What is the impact that range restriction has on correlations? (Howell, 2009)

This happens when the values for X or Y are intentionally or unintentionally constrained. This commonly causes r to shrink. With the exception of very unusual circumstances, restricting the range of X will increase r only when the restriction eliminates curvilinearity.

What is the difference between a one- and a two-tailed test? (Howell, 2009)

This is a matter of whether we are looking for extreme values on the extremely high side, the extremely low side, or both. If we are rejecting only one of these, we are doing a *one-tailed*, or *directional*, *test*. If we are rejecting both sides, this is a *two-tailed*, or *nondirectional*, *test*. For a one-tailed test, we set Alpha to .05 for the side we're interested in. For a two-tailed test, we set the alphas to .025 for both sides so they sum to .05.

What is an adjusted correlation coefficient? (Howell, 2009)

This is the correlation that has been corrected for bias due to sample size.

What is a correlation, in relationship to a regression line? (Howell, 2009)

This is the degree to which the points cluster around the regression line, and is a statistic representing the relationship between X & Y. Its maximum absolute value is 1.00, and it is the degree to which the covariance reaches its maximum. It is a measure of strength, not percentage of prediction. It is not an unbiased estimate of the correlation coefficient in the population; that is the Greek letter rho.

What is the definition of covariance? (Howell, 2009)

This is the degree to which two variables vary together. It is cov(X,Y) = Σ(X − X̄)(Y − Ȳ) / (N − 1). Alternately, cov(X,Y) = (ΣXY − (ΣX)(ΣY)/N) / (N − 1).
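A quick numerical check that the definitional and computational formulas agree (toy data; numpy assumed):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # toy data
y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
n = len(x)

definitional = ((x - x.mean()) * (y - y.mean())).sum() / (n - 1)
computational = (np.sum(x * y) - np.sum(x) * np.sum(y) / n) / (n - 1)
print(definitional, computational, np.cov(x, y, ddof=1)[0, 1])  # all 2.0
```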

What is the definition of "familywise error rate"? (Howell, 2009)

This is the probability that a family (set of comparisons being done) of conclusions will contain *at least* one Type I error.

What is exact collinearity in regression? (Cohen et al., 2003).

This is when one IV has a correlation or multiple correlation of 1.0 with the other IVs. This indicates a mistake in setting up the regression equation. It could mean you were using the same variable twice, or that a third variable derives its data directly from the other two predictors and thus has no unique data of its own.

What is the meaning of "sampling error?" (Howell, 2009)

This refers to variability due to chance. It indicates that the numerical value of a sample statistic will be in error (i.e., will deviate from the parameter it is estimating) as a result of the particular observations that happen to be included in the sample. Note that these "errors" are not due to carelessness or mistakes.

What is the goal of regression? (Howell, 2009)

To find the constants in the equation that best allow us to predict values of a criterion.

What are the possible ways that someone can evaluate how to select factors in EFA?

(1) Eigenvalues via the Kaiser method, which retains eigenvalues greater than or equal to 1.0. This is critiqued by Russell (2002): when a large number of items are included in an FA, it is likely that a relatively large number of factors with eigenvalues greater than or equal to 1.0 will be extracted. (2) Scree plot (plots eigenvalues against factors: factors in descending order are arranged along the abscissa with eigenvalue as the ordinate, negatively decreasing; look for the part of the line where the slope changes). (3) Parallel analysis (a minimal sketch follows below): (a) randomly generate a data set with the same number of cases and variables; (b) PCA is repeatedly performed on such generated data sets and all eigenvalues are noted for each analysis; (c) average the eigenvalues for each component and compare with the results from the real data. Only components from the real data set whose eigenvalues exceed the average eigenvalue from the randomly generated data are retained. (4) Theoretical foundation (does theory imply there should be something specific there?)
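A minimal sketch of parallel analysis (item 3 above) with numpy; the function name and the use of correlation-matrix eigenvalues are illustrative assumptions:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Compare real eigenvalues to average eigenvalues from random data
    of the same shape (minimal sketch)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        r = np.corrcoef(rng.normal(size=(n, p)), rowvar=False)
        rand[i] = np.sort(np.linalg.eigvalsh(r))[::-1]
    real = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    keep = real > rand.mean(axis=0)
    # retain components up to the first one that fails the comparison
    n_keep = int(np.argmax(~keep)) if not keep.all() else p
    return real, rand.mean(axis=0), n_keep
```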

What are methods of establishing isolation of effects of a target IV for evaluating causation? (Higginbotham et al., 1988)

(1) Examine its effects within constant values of other potential causes. (2) Create the variable and apply it differentially to groups who are randomly assigned to conditions (experimental design). (3) Measure and statistically control (partial) the effects of potential alternative explanatory variables.

What are some takeaways about PCA? (Tabachnick & Fidell, 2013)

(1) Goal is to extract maximum variance from the data set with each component. (2) The first principal component is the linear combination of observed variables that maximally separates subjects by maximizing the variance of their component scores. The second component is formed from residual correlations; it is the linear combination of observed variables that extracts maximum variability unrelated to the first component, and so forth. (3) Ordered, with the first component extracting the most variance and the last the least. The solution is mathematically unique and, if all components are retained, exactly reproduces the observed correlation matrix. (4) The solution of choice for a researcher who is primarily interested in reducing a large number of variables down to a smaller number of components. Also a useful first step in FA, where it reveals a great deal about the maximum number and nature of factors.
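A short illustration of points (1)-(3) using scikit-learn's PCA on simulated data (note this sketch decomposes the covariance of the centered data, an assumption of the example rather than of the text):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
data = rng.normal(size=(300, 6))    # six observed variables (simulated)
data[:, 1] += data[:, 0]            # induce some shared variance

pca = PCA().fit(data)
# Components come out ordered: the first extracts the most variance
print(pca.explained_variance_ratio_)          # monotonically decreasing
print(pca.explained_variance_ratio_.sum())    # all components retained -> 1.0
```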

What are two things to keep in mind regarding causation, per Cohen et al. (2003)?

(1) If Y is dichotomous, we evaluate this as a change in proportion. (2) The third proposition should not assume that if you change X, Y will change. It does not always have to be the case because (a) X may not be manipulable; (b) even when it can be, the way it is manipulated may determine whether and how Y changes, because the nature of the manipulation may defeat or alter the normal causal mechanism whereby X operates.

What are some concerns that Fabrigar et al. (1999) said may come up within the realm of study design?

(1) If researcher inadequately samples measured variables from the domain of interest, they may fail to uncover important common factors. (2) If irrelevant measures are included, spurious common factors may emerge, or true common ones may be obscured.

What are three factors that influence the power of a test? (Cohen et al., 2003)

(1) Increase Alpha (e.g., from .01 to .05). This is because we are expanding the region of rejection of H0 as determined by the Alpha level and whether it is one- or two-tailed. (2) As sample size increases, the power increases. (3) The magnitude of the effect in the population, or the degree of departure from H0, can impact this. So the larger this is, the greater the power.
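All three factors can be seen directly in a power calculation; a sketch using statsmodels' TTestIndPower (an independent-samples t test is assumed here purely for illustration):

```python
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()
# Power rises with alpha, with sample size, and with population effect size
for alpha in (0.01, 0.05):
    for n in (25, 100):
        for d in (0.2, 0.5):
            p = power.power(effect_size=d, nobs1=n, alpha=alpha)
            print(f"alpha={alpha}, n={n} per group, d={d}: power={p:.2f}")
```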

What are the two regression diagnostic statistics for evaluating outliers?

(1) Leverage, which tells us how far the observed values for the case are from the mean values on the set of IVs. Those further from the mean have a greater *potential* to influence the results of the equation. (2) Discrepancy, which is a measure of the distance between the predicted and observed values on Y. This looks at whether a point has pulled the regression line towards it or not. Almost always use externally studentized residuals (these examine what would happen if the outlying case were deleted from the data set).
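A sketch of both diagnostics via statsmodels' OLSInfluence (simulated data; the variable names are illustrative):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence

rng = np.random.default_rng(2)
X = sm.add_constant(rng.normal(size=(100, 2)))   # hypothetical IVs
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=100)

fit = sm.OLS(y, X).fit()
infl = OLSInfluence(fit)
leverage = infl.hat_matrix_diag                # distance from the IV means
discrepancy = infl.resid_studentized_external  # externally studentized residuals
print(leverage.max(), np.abs(discrepancy).max())
```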

What are the three strategies for remedying multicollinearity in Cohen et al. (2003)?

(1) Model Respecification. (2) Collection of Additional Data. (3) Ridge Regression.

Describe the model respecification strategy for remedying multicollinearity. (Cohen et al., 2003)

(1) Model respecification. This could involve combining variables in a single index (perhaps after converting to z-scores). If prior theory or empirical work insists that the differential importance of each variable be maintained, then weighting schemes may be used. Also, one (or more) IVs may be dropped, though be careful.

What are the four steps for establishing mediation? (Baron & Kenny, 1986)

(1) M~X. (2) Y~X. (3) Y~M. (4) To establish that M completely mediates the X-Y relationship, the effect of X on Y controlling for M (path c') should be zero. Tested at the same time as (3). Perfect mediation occurs if the IV has no effect when the mediator is controlled.
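A minimal sketch of the four steps as OLS regressions (simulated data in which M fully transmits the effect of X; statsmodels assumed):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=n)                 # hypothetical IV
m = 0.6 * x + rng.normal(size=n)       # mediator caused by X
y = 0.5 * m + rng.normal(size=n)       # outcome caused only by M

step1 = sm.OLS(m, sm.add_constant(x)).fit()                         # M ~ X (path a)
step2 = sm.OLS(y, sm.add_constant(x)).fit()                         # Y ~ X (path c)
step34 = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit()  # Y ~ M + X (paths b, c')
print("a  =", step1.params[1])
print("c  =", step2.params[1])
print("c' =", step34.params[2])  # near zero here -> consistent with full mediation
```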

What are the preliminary steps to performing a CFA? (Furr & Bacharach, 2014)

(1) Specification of the measurement model (first, one must specify the number of dimensions, factors, or latent variables (represented by ovals) that are hypothesized to underlie the test's items (represented by rectangles). Guidelines: (a) at least one item is linked to each factor; (b) each item is typically linked to only one latent variable). (2) Computations (first, actual variances and covariances; then parameter estimates (and inferential tests); then implied variances and covariances; then indices of model fit). (3) Interpreting and reporting output. (4) Model modification and reanalysis (if necessary).

What are two ways of detecting multicollinearity in regression? (Cohen et al., 2003)

(1) The Variance Inflation Factor (VIF). This is an index of the amount by which the variance of each regression coefficient is increased relative to the situation in which all the predictor variables are uncorrelated. A common rule of thumb is that a VIF greater than or equal to 10 is evidence of severe multicollinearity involving the corresponding IV. (2) Tolerance. This is the reciprocal of the VIF. It tells us how much of the variance in Xi is independent of the other IVs. A common rule of thumb is that .10 or less indicates a serious problem.
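Both indices, computed with statsmodels' variance_inflation_factor (simulated predictors; x2 is deliberately built to be nearly redundant with x1):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(4)
x1 = rng.normal(size=200)
x2 = x1 + 0.1 * rng.normal(size=200)   # nearly redundant with x1
x3 = rng.normal(size=200)
exog = sm.add_constant(np.column_stack([x1, x2, x3]))

for i in range(1, exog.shape[1]):      # skip the constant column
    vif = variance_inflation_factor(exog, i)
    print(f"x{i}: VIF={vif:.1f}, tolerance={1/vif:.3f}")
```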

What are the two types of methods for assessing shrinkage?

*Empirical methods*, which estimate the average predictive power of a sample regression equation on other samples (cross-validation), and *analytical methods*, which adjust the statistical bias to yield the corrected sample R2.

What are the similarities shared between principal components analysis (PCA) and exploratory factor analysis (EFA)? (Tabachnick & Fidell, 2013)

(1) The specific goals of PCA/FA are to summarize patterns of correlations among observed variables, to reduce a large number of observed variables to a smaller number of factors, to provide an operational definition (a regression equation) for an underlying process by using observed variables, or to test a theory about the nature of underlying processes. (2) They both have considerable utility in reducing numerous variables down to a few factors. They produce several linear combinations of observed variables, where each linear combination is a factor. These summarize patterns of correlations in the observed correlation matrix and can be used to reproduce it to a varying degree, but with considerable parsimony. The factor scores are often more reliable than individual items. (3) Both tend to be exploratory in nature.

What are three strategies for estimating the effect size used in determining the power of a test? (Cohen et al., 2003)

(1) To the extent that studies closely similar to the present investigation have been carried out by the current or other investigators, the ES found in those studies reflects the magnitude that can be expected. (2) In some research areas an investigator may posit some minimum population effect size that would have either practical or theoretical significance. They may determine that unless rho = .05, the importance of the relationship is insufficient to warrant a change in the policy or operations of the relevant institution. (3) Use certain suggested conventional definitions of ES values in determining the power of a study: .10, .30, and .50 for small, medium, and large (Cohen, 1988).

What are some subjective decision points worth acknowledging that happen in EFA?

(1) Whether to do PCA, EFA, or CFA? (2) What items to use, and how many? (3) What type of extraction method to use? (4) What type of rotation to use (oblique/orthogonal, which subtype)? (5) How many times to run it and whether to test different models? (6) How many factors to derive from the model? (7) Which items to keep in the end? (Maybe consider basing your choices on Comrey & Lee's (1992) suggestions for loading-level strengths.)

What are the four requirements for establishing causality? (Cohen et al., 2003)

(1) X precedes Y in time (temporal precedence). (2) Some mechanism whereby this causal effect operates can be posited (causal mechanism). (3) A change in the value of X is accompanied by a change in the value of Y on the average (association or correlation). (4) The effects of X on Y can be isolated from the effects of other potential variables on Y (non-spuriousness or lack of confounders).

What does it mean when data is Missing Completely At Random? (Kline, 2016)

It means (1) missing observations differ from observed scores only by chance; (2) presence versus absence of data on Y is unrelated to all other variables in the data set. In other words, observed (nonmissing) data are just a random sample of scores that the researcher would have analyzed had the data been complete (Enders, 2010). Results based on complete study cases only should not be biased, although power may be reduced with the smaller effective sample size.

What does it mean when data is Missing Not At Random? (Kline, 2016)

Data loss is non-ignorable, and the presence vs. absence of scores on Y depends on Y itself (e.g., patients drop out of a study when a particular treatment has unpleasant side effects). Results based on complete cases only can be severely biased when the data loss pattern is MNAR.

What does it mean when data is Missing At Random? (Kline, 2016)

Missingness on Y is unrelated to Y itself but is correlated with other variables in the data set; that is, missing data arise from a process that is both measured and predictable in a particular sample (Little, 2013). It is "conditionally" missing at random (Graham & Coffman, 2012). The pattern of missingness is *ignorable* concerning potential bias, but information lost to the MAR process is potentially recoverable through imputation.

What purposes for analysis do EFA & CFA share? (Tabachnick & Fidell, 2013)

(a) to summarize patterns of correlations and covariances among a number of observed variables; (b) to reduce a large number of observed variables to a smaller number of factors that are thought to predict or cause them; (c) to provide an operational definition for underlying processes by means of observed variables; or (d) to test particular theories regarding the underlying factors or processes that influence the results and outcomes of observed indicators (Tabachnick & Fidell, 2013).

What are the conditions for a variable to function as a mediator? (Baron & Kenny, 1986)

(a) Variations in the level of the IV significantly account for variations in the presumed mediator (i.e., path a); (b) variations in the mediator significantly account for variations in the DV (i.e., path b); and (c) when paths a and b are controlled, a previously significant relationship between the IV and DV is no longer significant.

What are outliers and can they impact the results of a regression? (Cohen et al., 2003)

*Outliers* refer to one or more atypical data points that do not fit well with the rest of the data. When they are present, the regression may produce results that strongly reflect the atypical cases rather than the general relationship observed in the rest of the data.

What is the difference between a direct effect and an indirect effect in regression? (Cohen et al., 2003)

A direct effect is looking at the direct relationship between two variables (e.g., Y~X). When we look at an indirect effect, we examine the relationship of two variables by means of an intervening one (e.g., Y~X2~X1).

In regression, what is a moderator? (Baron & Kenny, 1986)

A moderator is a qualitative or quantitative variable that affects the direction and/or strength of the relation between an independent or predictor variable and a dependent or criterion variable. Within a correlational framework, a moderator is a third variable that affects the zero-order correlation between two other variables. Moderation may also occur if the third variable causes the direction of the correlation to change.

What is pairwise deletion?

A type of available-case method where cases are only excluded if they have missing data on variables in a particular analysis. Problems: its covariance matrix can be nonpositive definite; its parameter estimates may be biased; and, most importantly, reasonable standard errors cannot be estimated directly with pairwise deletion.

What is listwise deletion?

A type of available-case method where subjects with missing scores on any variable are excluded from all analyses (the effective sample size includes only cases with complete records). Problems: estimated quantities will not be reflective of the whole sample, as cases with complete data are very likely different from those with missing data. However, if the number of cases lost to missing data is small (i.e., < 5%), little statistical power is lost.

What is the definition of a Type I Error (Howell, 2009)?

An instance where we reject H0 when in fact the null is true. Our chance of this, the conditional probability, is designated by the Greek letter Alpha, and it is the size of the rejection area. Thus, the conditionality of a Type I error is that it is the probability of rejecting H0 *given that it is true*.

What is the definition of a Type II error? (Howell, 2009)

An instance where we fail to reject H0 when it is in fact false and H1 is true. This is represented by the Greek letter Beta.

What is the difference between available case methods and simple-imputation methods, the two classical methods for addressing missing data?

Available case methods involve analyzing data available through deletion of incomplete cases. Simple-imputation methods involve replacing each missing score with a single calculated (imputed) score.

What was the first part of Steve's paragraph on CFA in his practice question?

CFA, on the other hand, has a number of a priori requirements that must be addressed before analysis can begin. The CFA method requires that the researcher predetermine the number of factors with which they hypothesize the indicators will have pattern coefficients, or factor loadings (Kline, 2016). Furthermore, the researcher must specify zero and nonzero loadings of the measured indicators on the common factors, due to the path-analytic nature of this procedure (Fabrigar et al., 1999; Kline, 2016). By doing this, though, CFA explicitly analyzes restricted measurement models.

What is the cross-validation method for assessing shrinkage in regression? (Cascio & Aguinis, 2011)

Cross-validity refers to whether the weights derived from one sample can predict outcomes to the same degree in the population as a whole or in other samples drawn from the same population.

Why do we want to center our predictors when testing for moderation in a regression? (Cohen et al., 2003)

Doing so has no effect on the estimate of the higher-order interaction in the regression equation. Doing so yields two straightforward, meaningful interpretations of each first-order regression coefficient of predictors entered into the regression equation: (1) effects of the individual predictors at the mean of the sample, and (2) average effects of each individual predictor across the range of the other variables. Doing so also eliminates nonessential multicollinearity between first-order predictors and predictors that carry their interactions with other predictors.

What are the operational distinctions between moderators and mediators? (Baron & Kenny, 1986)

From an operational standpoint, moderators and predictors are at the same level with regard to their role as causal variables, as they are exogenous to certain criterion effects. They always function as IVs. Mediators shift roles from effects to actual causes, implying that there is a causal order/pathway of effect. Moderators are also something that we can probably manipulate more easily, especially for dichotomous/non-continuous variables.

Does EFA have the ability to specify the exact correspondents between indicators and factors?

Furthermore, EFA *does not have the ability* to specify the exact correspondence between indicators and their potential factors (Kline, 2016). EFA allows indicators to freely load on multiple factors (Fabrigar et al., 1999), and such indicators are allowed to depend on theoretically all factors. Thus, EFA analyzes unrestricted measurement models. Within this, there is no unique set of statistical estimates derived from the EFA method for any particular multi-factor EFA model, which is due to its mandatory emphasis on rotation, which aids interpretability of factors (Kline, 2016). Finally, the assumption exists that the unique variance of the indicators is not shared amongst one another.

What purpose do CFA & EFA share regarding evaluating variance? (Kline 2016)

Furthermore, both CFA and EFA from a statistical standpoint seek to partition indicator variance (Kline, 2016). Both statistical techniques share a common goal of determining the common variance shared among the indicators, as EFA and CFA both share the assumption that common variance found is due to the factors that influence them. These statistical strategies both seek to calculate this proportion of the common variance shared among the indicators, or their communality (h2), with the assumption that any remaining variance is considered unique variance (U; U = 1 - h2), which is composed of specific variance (variance not explained by any factors in the model) and random measurement error.

What is the Sidak correction?

Given m different null hypotheses and a familywise alpha level of Alpha, each null hypothesis is rejected that has a p-value lower than Alpha-sub-SID = 1 - (1 - Alpha)^(1/m).

In regression, what is a mediator? (Baron & Kenny, 1986)

In general, a given variable may be said to function as a mediator to the extent that it accounts for the relation between the predictor and the criterion. Mediators explain how external physical events take on internal psychological significance. Whereas moderator variables specify when certain effects will hold, mediators speak to how or why such effects occur.

What is the bootstrapping method for assessing mediation in a regression? (Shrout & Bolger, 2002)

Increasingly popular is testing the indirect effect through bootstrapping (Shrout & Bolger, 2002). Bootstrapping is a non-parametric method based on resampling with replacement, done many times (e.g., 5,000). From each of these samples the indirect effect is computed, and a sampling distribution can be empirically generated. Because the mean of the bootstrapped distribution will not exactly equal the indirect effect, a correction for bias can be made. With the distribution, a confidence interval, a p value, or a standard error can be determined. Very typically a confidence interval is computed and checked to see whether zero is in the interval. If zero is not in the interval, then the researcher can be confident that the indirect effect is different from zero. A Z value can also be determined by dividing the bootstrapped estimate by its standard error, but bootstrapped standard errors suffer the same problem as the Sobel standard errors and are not recommended. (Bootstrapping does not require the assumption that a and b are uncorrelated.)
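A minimal percentile-bootstrap sketch in numpy (the helper name and the use of least squares for paths a and b are assumptions for illustration):

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect ab (minimal sketch)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    ab = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]            # path a: slope of M ~ X
        X2 = np.column_stack([np.ones(n), mb, xb])
        b = np.linalg.lstsq(X2, yb, rcond=None)[0][1]  # path b from Y ~ M + X
        ab[i] = a * b
    lo, hi = np.percentile(ab, [2.5, 97.5])
    return ab.mean(), (lo, hi)  # a CI excluding zero supports a nonzero indirect effect
```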

What is the Bonferroni correction/procedure?

It is a correction that compensates for the increase in Type I error by testing each individual hypothesis at a significance level of Alpha / m, where Alpha is the desired overall alpha level, and m is the number of hypotheses.
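For comparison, the Bonferroni and Šidák thresholds (both defined on these cards) for m = 10 tests at a familywise alpha of .05:

```python
m, alpha = 10, 0.05
bonferroni = alpha / m                  # Bonferroni: test each hypothesis at alpha / m
sidak = 1 - (1 - alpha) ** (1 / m)      # Sidak threshold from the earlier card
print(bonferroni, sidak)                # 0.00500 vs ~0.00512 (Sidak slightly less strict)
```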

Describe how the collection of additional data can help remedy multicollinearity. (Cohen et al., 2003).

Larger samples always improve the precision of the estimate of B. Also, you can try to reduce the correlations among the IVs, perhaps through manipulation in an experimental setting.

How do you diagnose your missing data to see if it is MCAR? (Kline, 2016)

Little and Rubin (2002) describe a multivariate statistical test of MCAR that compares complete vs. incomplete cases on Y across all other variables. If significant, MCAR is rejected. It is a very sensitive test, though. Another way is through a series of univariate t-test comparisons of cases that have missing scores on Y with cases that have complete records on the other variables.

What are examples of single-imputation methods?

Mean substitution and group-mean substitution (a missing score in a particular group is replaced with the group mean). Both are simple, but they can distort the distribution of the data by reducing variability.

What is multiple imputation? (Kline, 2016)

Multiple imputation replaces missing score with multiple estimated (imputed) values from a predictive distribution that models the data loss mechanism. Process is repeated so that analysis is actually conducted with multiple versions of imputed data sets. For large data sets, high number of imputed data sets may need to be generated so results can have reasonable precision (Little, 2013). Final set of estimates come from computer synthesizing the results from all replications.

Describe the ridge regression strategy for remedying multicollinearity. (Cohen et al., 2003)

An option for when there is *extremely* high multicollinearity. A constant is added to the variance of each of the IVs. This leads to a biased estimate of each regression coefficient Bi. The estimate is slightly attenuated (too close to 0) so that it is no longer on average equal to the value of Bi* in the population. However, the estimate of SE-sub-B is substantially reduced.
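A sketch of this bias/variance trade-off with scikit-learn's Ridge (simulated, extremely collinear predictors; the penalty value of 1.0 is an arbitrary illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(5)
x1 = rng.normal(size=100)
x2 = x1 + 0.05 * rng.normal(size=100)     # extremely collinear with x1
X = np.column_stack([x1, x2])
y = x1 + rng.normal(size=100)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)        # the added constant slightly biases each Bi
print(ols.coef_)                          # unstable, inflated coefficients
print(ridge.coef_)                        # attenuated but far more stable estimates
```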

How does one do the Benjamini-Hochberg Method? (Benjamini & Hochberg, 1995)

Order the results in a column by p value. Create a column i, an index of comparison that simply ranks the p values (1 being the lowest). Then calculate p-crit, the critical value for the test: (i/k)Alpha, where i is the index, k is the number of tests, and Alpha is the desired alpha. Work your way down the table: if p > p-crit, we retain the null hypothesis and move on to the next row. As soon as p < p-crit, we reject that null hypothesis and all subsequent ones.
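A minimal implementation sketch in numpy (the function name is illustrative; it rejects every hypothesis ranked at or below the largest rank whose p value falls under its critical value):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean array marking which hypotheses to reject (minimal sketch)."""
    p = np.asarray(pvals)
    k = len(p)
    order = np.argsort(p)                       # index i ranks p from lowest to highest
    p_crit = (np.arange(1, k + 1) / k) * alpha  # (i/k) * alpha
    below = p[order] <= p_crit
    reject = np.zeros(k, dtype=bool)
    if below.any():
        cutoff = np.max(np.nonzero(below))      # largest rank with p <= p-crit
        reject[order[: cutoff + 1]] = True      # reject it and all smaller p values
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.2]))
```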

What is the difference between full and partial mediation? (Baron & Kenny, 1986)

Partial mediation is the case in which the path from X to Y is reduced in absolute size, but is still different from zero when the mediator is introduced. Complete mediation is the case in which variable X no longer affects Y after M has been controlled, making the path c' zero.

What is the purpose of rotation in factor analysis, after extraction?

Rotation is used to improve the interpretability and scientific utility of the solution. It is not used to improve the quality of the mathematical fit between the observed and reproduced correlation matrices, because all orthogonally rotated solutions are mathematically equivalent to one another and to the solution before rotation. With orthogonal rotation, factors/components are not correlated. Orthogonal solutions offer ease of interpreting, describing, and reporting results; yet they strain "reality" unless the researcher is convinced that underlying processes are almost independent. Oblique rotation allows correlation between factors.

What should we know about CFA and directionality of relationships? (Kline, 2016)

Standard CFA model indicators have two unrelated causes, the factor and the error term. This is reflective measurement in that latent variables are assumed to cause observed variables. Observed variables in reflective measurement models are called effect (reflective) indicators.

What was the second part of Steve's paragraph on CFA in his practice question?

The CFA method does not require rotation for interpretability as EFA does, since all loadings have been declared before analysis (Kline, 2016). This does not mean that a researcher cannot specify a complex model in which an indicator loads on multiple factors. However, doing so increases the complexity of the model and makes interpretability more difficult. Similarly, CFA allows researchers to estimate whether specific variance is shared between any indicators, in the form of error correlations.

What is the Sobel test method for testing the indirect effect (c = c' + ab) of a mediated regression? (Baron & Kenny, 1986)

The Sobel (1982) test, or the delta method. This test provides an approximate estimate of the standard error of ab. However, the Sobel test has been criticized for being very conservative, and thus having very low power. This is because the sampling distribution of ab is highly skewed.
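The delta-method standard error underlying the test is commonly written as SE(ab) = sqrt(b²·SEa² + a²·SEb²); a sketch (the input values are hypothetical):

```python
import numpy as np

def sobel_z(a, se_a, b, se_b):
    """Sobel (delta-method) z statistic for the indirect effect ab."""
    return (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

print(sobel_z(a=0.5, se_a=0.1, b=0.4, se_b=0.1))  # hypothetical path estimates
```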

What are covariances? (Howell, 2009)

The extent to which two variables vary in relationship with one another.

What are the two types of shrunken R?

(1) The first type of shrinkage is a function of estimating the squared population correlation coefficient rho2 from a sample R2. R2 is not an unbiased estimate. Due to random sampling fluctuations, only rarely would an IV's r2 with Y be exactly 0, even when one or more IVs account for no Y variance in the population. It will virtually always have a positive value, so in most samples it would make some (possibly trivial) contribution. The smaller the sample size, the greater the positive variations, and thus the greater the inflation of R2. Also, the more IVs we have, the more opportunity for the sample R2 to inflate. (2) The second type of shrinkage occurs when we use the regression weights derived from one sample to predict the criterion variable for a new sample drawn from the same population.
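The first (analytical) kind of shrinkage is commonly corrected with the standard adjusted R2 formula; a sketch showing how few cases and many IVs inflate sample R2:

```python
def adjusted_r2(r2, n, k):
    """Adjust sample R2 for n cases and k IVs: 1 - (1 - R2)(n - 1)/(n - k - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(adjusted_r2(r2=0.30, n=50, k=8))   # ~0.16: sizable shrinkage
print(adjusted_r2(r2=0.30, n=500, k=8))  # ~0.29: little shrinkage with large n
```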

What are the three causal paths for evaluating moderation? (Baron & Kenny, 1986; Cohen et al., 2003)

The intended predictor/IV (path a), the moderating variable (path b), and the interaction term, or product, of the two (path c). Paths a & b can be significant, but c is what we care about. Side note: it's desirable that Z be uncorrelated with both X & Y to provide a clearly interpretable interaction term.

How does EFA manage the indicators and factors that are used in analysis?

The method of EFA does not require the a priori specification of the number of theoretical factors to be estimated from the larger set of indicators (Kline, 2016). EFA seeks to describe and summarize a data set based on the correlations among indicators, and does not require that the researcher have any particular hypotheses regarding the underlying processes or factors that bring the data together through communality (Tabachnick & Fidell, 2013). EFA procedures can generate many possible factor solutions, limited only by the number of indicators and by whether or not the researcher specifically constrains the program to look for a set number of factors.

What is the relationship between Alpha and Type II errors? (Howell, 2009)

The more stringent we are about alpha, the more likely we are to make a Type II error. As we become more stringent about the possibility of rejecting H0 when it is in fact true, it becomes easier to fail to reject H0 when it is in fact false.

What is it that we are referring to when we refer to the "power" of a test? (Cohen et al., 2003)

The power of a test is the probability of rejecting H0 when it is *actually* false. It is defined as 1 − Beta.

What is the idea of Bonferroni inequality? (Howell, 2009)

The probability of occurrence of one *or more* events can never exceed the sum of their individual probabilities.

You find out that your data is not MCAR. Is it MAR or MNAR? (Kline, 2016)

We can never really be sure which one, as omitted variables may account for data loss on Y that is related to Y itself.

In regression, what is multicollinearity? (Cohen et al., 2003)

When IVs become increasingly correlated with a set of other IVs in the regression equation, it will have less and less unique information that it can potentially contribute to the prediction of Y. This can cause some major problems if these correlations are particularly high. Individual regression coefficients can change appreciably in magnitude and even in sign, making them harder to interpret.

In regression, what is the concept of suppression? (Kline, 2016)

When either (a) absolute value of a predictor's beta weight is greater than that of its bivariate correlation with the criterion, or (b) the two have different signs. *Negative suppression* occurs when predictors have positive bivariate correlations with the criterion and with each other, but one receives a negative beta weight in the regression analysis. *Classical suppression* is where one predictor is uncorrelated with the criterion but receives a nonzero beta weight controlling for another predictor. *Reciprocal suppression* is when two variables correlate positively with the criterion but negatively with each other.

What is the point of power analysis? (Cohen et al., 2003)

When the power turns out to be insufficient, the investigator may decide to revise the research plans, or even drop the investigation entirely if such revision is impossible. Thus, determining statistical power is of primary value as a preinvestigation procedure. If power is found to be insufficient, the research plan may be revised in ways that will increase the power: by increasing n, by increasing the number of levels or the variability of the independent variable, or possibly by increasing Alpha.

What are the consequences of either underspecifying or overspecifying a model with too few/many factors? (Fabrigar et al., 1999)

When a model is underspecified (too few factors; Cattell, 1978), substantial error is likely (Wood et al., 1996). This also leads to poor estimates of factor loadings, and can lead to two common factors being combined (therefore obscuring the true structure). Overfactoring is the lesser evil, but too many factors can lead to postulating the existence of constructs with little theoretical value and to developing unnecessarily complex theories. It all requires balancing parsimony (keeping the model simple, with few common factors) with the need for plausibility (a sufficient number of common factors to account for the correlations among variables).

What are the causal pathways for testing mediator effects? (Baron & Kenny, 1986)

The model assumes three variables, such that there are two causal paths feeding into the outcome variable: the direct impact of the IV (path c) and the impact of the mediator (path b). There is also a path from the IV to the mediator (path a).

What is the definition of a regression line? (Howell, 2009)

Within a scatterplot, it is the representation of Y as predicted from X, representing the best prediction of Yi for a given value of Xi, for the *i*th subject or observation. For a specified value of X, the corresponding height of the regression line represents the best prediction of Y.

What is the equation for determining a family-wise error rate?

FW = 1 − (1 − alpha')^c, where c = number of comparisons and alpha' = error rate per comparison.
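A quick numeric check of the formula (values chosen purely for illustration):

```python
c, alpha_prime = 5, 0.05
fw = 1 - (1 - alpha_prime) ** c
print(fw)  # ~0.226: five comparisons at .05 each
```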

