SEM - Part 1


Model Development Approach

A model is first tested using SEM; if the model is deficient somewhere, an alternative model is suggested based on the modification indices. This is the most common method. The downside to this approach is that "developed" models can be unstable and hard to replicate because you are capitalizing on chance in your data set.

Nonrecursive Model

A nonrecursive model is a structural model in which there is a feedback loop between two constructs.

Interaction

A significant effect of a moderator variable on another relationship is called an interaction. A moderator variable can be qualitative (e.g., male vs. female) or quantitative (e.g., high vs. low involvement score). A moderator variable will usually strengthen or weaken the relationship between other constructs.

Full vs. Partial Invariance

Partial invariance is where some, but not all, of the estimates per construct are equivalent across groups. Based on Hair et al. (2006), if two parameters per construct are found to be invariant, then partial invariance is established and the researcher can proceed. If full invariance is not supported, the researcher can systematically "free" the constraints on each factor that have the greatest impact on chi-square until the chi-square difference test for metric invariance becomes nonsignificant. Note: you cannot have partial configural invariance; partial invariance applies only to metric invariance, scalar invariance, and factor covariance invariance.

Harman's Single Factor Test

Perform a CFA and have all your items load on one construct. If you get an acceptable model fit, then you have a common method bias. Researchers have questioned this approach and have concluded that it is an unreliable test for common method bias.

Summers 2001

Reasons for article rejections.

Grapentine 2001

Response to Drolet and Morrison 2001... Oh yes we do need multiple measures!!

Iacobucci 2009

SEM is not causal; it only models covariance.

McQuitty 2004

Sample Size Guidelines

Huber 2008

Value of "Sticky Articles" - Think SUCCES S_Simple, U_unexpected, C_concrete, C_credible, E_emotional, S_story Make sure there new knowledge for the field.

Confirmatory Factor Analysis

Confirmatory Factor Analysis is a statistical technique that analyzes a measurement model in which both the number of factors and the indicators loading on each factor are explicitly specified.

Hu & Bentler 1999

Cut off criteria! Get your cut off criteria right here!

Moderation

Moderation occurs when the relationship between two constructs is influenced by a third variable.

Alternative Models Approach

More than one theoretical model is suggested, and you test to see which one has superior fit. Most of the time researchers are hard pressed to find two alternative models supported by the literature.

Recursive model

A recursive model is what we would consider a normal structural model, one in which no feedback loops exist. The problem with feedback loops is that you create an infinite loop when estimating the parameters between the feedback variables.

Marker-Variable Technique

The researcher introduces a variable in a survey that is theoretically unrelated to the other constructs in the study. This planted variable is called the marker variable. You have to plan ahead because this variable is included in your initial data collection. CMB is assessed by examining the correlations between the marker variable and the other variables of your study. Theoretically, the correlations between your marker variable and the variables of the study should be low; if not, the test points to a method bias. To determine if bias is present, you need to partial the marker variable correlation out of all the other correlations between constructs. Here is how to get the adjusted correlation:
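One widely used form of this adjustment (a sketch following Lindell and Whitney 2001; the symbols are illustrative, not the deck's own notation): r_adjusted = (r_observed - r_marker) / (1 - r_marker), where r_observed is the correlation between two substantive constructs and r_marker is the marker variable's (typically smallest) correlation. If the adjusted correlations remain significant, common method bias is unlikely to account for the findings.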

Nunnally and Bernstein 1994

state that an acceptable level of reliability is a Cronbach's alpha that is greater than .70.

Common Method Bias

the inflation (or in rare cases deflation) of the true correlation among observable variables in a study. Research has shown that because respondents are replying to survey questions about independent and dependent variables at the same time, the covariation can be artificially inflated (which leads to biased parameter estimates).

Strictly Confirmatory Approach

the researcher has a single model that is accepted or rejected based on its correspondence to the data. (All or Nothing approach) This is a very narrow approach and is rarely used.

Content Validity

(face validity) Do the items represent the construct of interest? Are you capturing the unobserved variable? With a sufficient number of items, a researcher can make this argument. If it is a new construct and the researcher includes a small number of items (like two), you set yourself up for criticism that you may not have achieved content validity.

Predictive Validity

Does the construct actually predict what it is supposed to predict?

Drawbacks of Cronbach's Alpha

1) it is inflated when the construct has a large number of indicators, and 2) it assumes that all indicators have equal influence.

Assumptions of SEM

A) Multivariate Normal Distribution of the Indicators B) SEM is primarily for Interval Data C) Maximum Likelihood Estimation is the default method (other estimators can be used) D) SEM assumes a complete data set (more on handling missing data later) E) SEM assumes Linear Relationships between indicators and latent variables F) Multicollinearity is not present G) Adequate Sample Size

What do I do if model is Under-Identified?

A) Reduce the number of path estimates or covariances B) Add more exogenous (independent) variables

How is a CFA different than an EFA?

An Exploratory Factor Analysis (EFA) is useful for data reduction of a large number of items and for determining the number of factors. EFA is typically conducted with correlation matrices, which are problematic when comparing parameters across samples. CFA uses a covariance matrix and allows for comparison across samples. EFA needs to consider rotation to get a good fit, whereas CFA does not worry about rotation because you are denoting the specific items that load on each construct. With a CFA, no measured item is allowed to load on more than one factor. In an EFA, all measurement items load on all factors.

Bootstrapping

Bootstrapping is a computer-based method of resampling in which your sample is treated like a pseudo-population. Cases from the original sample are randomly selected with replacement to generate other data sets. Because of sampling with replacement, the same case can appear in more than one generated data set. You will generate numerous bootstrap samples (a good idea is to have at least 2,000). These bootstrapped samples simulate the drawing of numerous random samples from a population. The bootstrap samples are then used to estimate the direct and indirect effects of a relationship and to give you a confidence interval for determining whether your indirect effects are significant.
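A minimal sketch of a percentile bootstrap for an indirect effect, using ordinary least squares in place of a full SEM estimation (the variable names x, m, y and the 2,000 resamples are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    # a path: slope from regressing M on X
    a = np.polyfit(x, m, 1)[0]
    # b path: coefficient of M when Y is regressed on X and M together
    design = np.column_stack([np.ones(len(x)), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, level=0.95):
    n = len(x)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample cases with replacement
        estimates[i] = indirect_effect(x[idx], m[idx], y[idx])
    tail = (1 - level) / 2 * 100
    return np.percentile(estimates, [tail, 100 - tail])

If the resulting interval excludes zero, the indirect effect is treated as significant.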

Steenkamp & Baumgartner 2000

Focus on structural relationships 1. SEM not solution to ALL modeling problems 2. SEM stronger in testing theory than decision making 3. SEM unique in that it accounts for measurement error 4. SEM useful for testing hypotheses in cross-sectional data with multiple constructs and indicators 5. SEM can be used to examine Non-linear effects 6. SEM can be used to study Heterogeneous relationships 7. SEM can be used for Longitudinal data

Identification with SEM Models

Identification in regard to a SEM model deals with whether there is enough information to identify a solution (or, in this instance, estimate a parameter). An underidentified model means that there are more parameters to be estimated in your model than unique elements in the covariance matrix.

If you have collected data at two points in time and want to see the difference between the two values...

If you have collected data at two points in time and want to see the difference between the two values, then you just need to do a two group comparison and examine the differences. If you are trying to determine the change (growth) of a data point with at least three points in time, then you more than likely need to perform a latent growth curve model.

Rigdon (1994)

Introduced the equation for calculating degrees of freedom in a CFA: df = m(m + 1)/2 - 2m - ξ(ξ - 1)/2, where m = number of indicators and ξ = number of latent constructs.
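As a quick hypothetical check of the formula: with 12 indicators and 3 latent constructs, df = 12(13)/2 - 2(12) - 3(2)/2 = 78 - 24 - 3 = 51.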

How do I know which parameter to free in a partial invariance test?

It is a little bit of trial and error. You can look at the individual factor loadings for each group and see where you have big differences across the groups. Again, it is a little trial and error. It may be just one construct causing the significant chi-square difference.

Measurement Model Invariance across Groups

Measurement model invariance testing is done to determine whether the factor loadings of indicators differ across groups. This test determines whether your items are actually measuring the same thing across groups.

Fornell & Larcker 1981

Method of determining convergent and discriminant validity. An AVE over .5 denotes convergent validity. If the squared correlation between the CONSTRUCTS, not items, is below the AVE, you have discriminant validity.
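For reference (a standard formulation, not the deck's own wording): AVE = (sum of the squared standardized loadings) / (number of indicators), and the Fornell-Larcker comparison can equivalently be stated as the square root of the AVE exceeding the correlation between each pair of constructs.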

Metric Invariance

Metric invariance establishes the equivalence of the basic "meaning" of the construct because the loadings denote the relationship between the indicators and the latent construct.

Sample Size - How much is enough?

One of the most common suggestions for sample size is Nunnally and Bernstein's (1994) rule of 10. The rule of 10 states that you should have 10 observations for each indicator in your model. Another rule of thumb, based on Stevens (1996), is to have at least 15 cases per indicator. Overall, there are numerous citations (Garver and Mentzer 1999; Hoelter 1983) that denote a "critical sample size" of at least 200, which is understood to provide sufficient statistical power for your data analysis.
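As a hypothetical illustration, a model with 20 indicators would call for at least 200 cases under the rule of 10, at least 300 under Stevens' 15-per-indicator guideline, and at least 200 under the critical-sample-size heuristic.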

Cronbach's Alpha

One of the most popular techniques for assessing reliability with measures is to calculate Cronbach's Alpha (also called coefficient alpha -- α). This analysis measures the degree to which responses are consistent across the items within a construct (internal consistency).
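A minimal sketch of the calculation, alpha = k/(k - 1) * (1 - sum of item variances / variance of the summed scale); the function name and the assumption that rows are respondents and columns are the items of one construct are illustrative:

import numpy as np

def cronbach_alpha(items):
    # items: 2-D array, rows = respondents, columns = items of one construct
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)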

Steps of the Sobel Test - Baron & Kenny (1986)

Step 1 - Make sure that X has an influence on Y (absent of M at this point). Even if this is insignificant, you can still proceed to step 2. Step 2 - Test that X has an influence on M (no Y included) - needs to be significant for mediation. Step 3 - Test that X has an influence on M and that M has an influence on Y - needs to be significant. Step 4 - Test the direct and indirect relationships simultaneously and determine if you have significance in the indirect effects.
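The Sobel statistic itself, usually reported alongside these steps, is commonly written as z = (a*b) / sqrt(b^2 * SEa^2 + a^2 * SEb^2), where a and SEa come from the X-to-M regression and b and SEb from the M-to-Y path; |z| greater than 1.96 indicates a significant indirect effect at the .05 level.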

What are the 3 Approaches SEM normally takes?

Strictly Confirmatory Approach, Alternative Models Approach, Model Development Approach

Convergent Validity

The idea that a set of items are presumed to measure the "same" construct. All the items "converge" on that construct.

The Main Advantages of Structural Equation Modeling

The main advantages of structural equation modeling are that 1) it lets you analyze the influence of predictor variables on numerous dependent variables simultaneously and 2) it accounts for measurement error. In contrast, regression models do not partial measurement error from error attributed to lack of model fit.

Measurement Model vs. Structural Model

The measurement model is where the researcher assesses the validity of the measures for each construct. After showing the validity of the measurement model, the researcher proceeds to the structural model. The structural model is concerned with the influence and significance of the relationships between constructs.

Latent Common Method Factor

The most popular way to handle common method bias is to include a common method factor in your CFA. A common method factor is a latent variable that has a direct relationship with each construct's indicators. The common method factor will represent variance shared across constructs (due to the potential method bias). You will first model your CFA, then include a latent variable in the model. Label this variable "common method" or whatever you want in order to remember that this is the common method construct. Then create relationships from the common method variable to each construct's indicators.

How to test for Mediation

The research by Baron and Kenny (1986) outlined how to test for mediation and has been widely used as the standard in mediation testing over the last 25 years. Recently, this method has been questioned, and we will talk about other ways to test for mediation, but you need to understand how Baron and Kenny suggest testing for mediation because of its prevalence in academic research.

Root Mean Square Error of Approximation (RMSEA)

This is a "badness of fit" test where values close to zero equal the best fit. There is a good model fit if RMSEA is below .05. T
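A commonly cited form of the calculation: RMSEA = sqrt(max(chi-square - df, 0) / (df * (N - 1))), where N is the sample size.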

Scalar Invariance

This is where you are going to constrain the measurement intercepts (means) and compare to the unconstrained model. In AMOS, that constrained model is called "measurement intercepts".

Comparative Fit Index (CFI)

This test compares the covariance matrix predicted by your model to the observed covariance matrix of the null model. CFI varies from 0 to 1; a CFI value close to one indicates a good fit. The cutoff for a good fit is a CFI value >.90, indicating that 90% of the covariation in the data can be reproduced by your model. CFI is relatively insensitive to sample size and is a recommended fit statistic to report.
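A commonly cited form of the calculation: CFI = 1 - max(chi-square for your model - its df, 0) / max(chi-square for the null model - its df, chi-square for your model - its df, 0).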

NNFI

Tucker Lewis Index (TLI), also called the non-normed fit index. It is calculated by (chi-square for the null model / its degrees of freedom - chi-square for your model / your model's degrees of freedom) / (chi-square for the null model / its degrees of freedom - 1). Like the other fit indices, above .90 equals a good fit.

Drolet & Morrison 2001

We really don't need multiple item measures. Less is more.

Raykov's Rho (ρ)

also called composite reliability. (This is not Spearman's Rho) This is a reliability based on factor loadings. More details on how to calculate this reliability is in the CFA section. This reliability has the same cutoff criteria as Cronbach's alpha for acceptable level of reliability >.70.
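A standard formulation (using standardized loadings; the notation is illustrative): composite reliability = (sum of the loadings)^2 / [(sum of the loadings)^2 + sum of the error variances], where each error variance is 1 minus the squared standardized loading.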

Construct Validity

assessment of a construct and its measures that include content, predictive, convergent and discriminant validity. It also includes reliability and unidimensionality.

Relative Fit Index (RFI)

calculated by 1 - (chi-square for your model / degrees of freedom for your model) / (chi-square for the null model / degrees of freedom for the null model). An RFI close to 1 indicates a good fit. Acceptable fit is .90 and above.

Calculating Degrees of Freedom for Structural Model

df = m(m + 1)/2 - 2m - ξ(ξ - 1)/2 - g - b, where m = number of indicators, ξ = number of latent constructs, g = number of structural relationships from independent constructs to dependent constructs, and b = number of structural relationships from dependent constructs to dependent constructs.

