Exploratory Structural Equation Modeling (ESEM) brings together the strengths of Exploratory Factor Analysis (EFA) and Structural Equation Modeling (SEM) (Asparouhov and Muthén 2009). In its simplest form, it uses EFA “blocks” as part of a SEM model. These blocks typically use either Geomin or target rotations.

Currently, the integration of EFA blocks into ESEM models is still under development by the team behind lavaan, a prominent R package for CFA and SEM.

Here, we will overcome this difficulty with the ESEM-within-CFA approach, where we first do the EFA part and then use its results as starting values in a CFA model (or an SEM model; see Marsh et al. (2014) for a nice introduction to the topic).

We will use the Holzinger and Swineford (1939) dataset, available in the lavaan package. It contains the scores of 301 children on tests of the following cognitive abilities: visual (items x1-x3), textual (x4-x6) and speed (x7-x9). Let’s load the dataset and keep only the item columns.

# load full data
hw_data <- lavaan::HolzingerSwineford1939
# keep all rows and only the item columns
hw_data <- hw_data[, c(7:15)]

# take a look
head(hw_data)
#>         x1   x2    x3       x4   x5        x6       x7   x8       x9
#> 1 3.333333 7.75 0.375 2.333333 5.75 1.2857143 3.391304 5.75 6.361111
#> 2 5.333333 5.25 2.125 1.666667 3.00 1.2857143 3.782609 6.25 7.916667
#> 3 4.500000 5.25 1.875 1.000000 1.75 0.4285714 3.260870 3.90 4.416667
#> 4 5.333333 7.75 3.000 2.666667 4.50 2.4285714 3.000000 5.30 4.861111
#> 5 4.833333 4.75 0.875 2.666667 4.00 2.5714286 3.695652 6.30 5.916667
#> 6 5.333333 5.00 2.250 1.000000 3.00 0.8571429 4.347826 6.65 7.500000

EFA blocks

To make an EFA with target rotation, we first need to specify the target rotation matrix. The make_target() function facilitates this. For it to work, we need to indicate the mapping between factors and their main loadings, that is, which items we expect to load heavily on each factor. This information must be contained in a list where the name of each element is the factor name, and its content is a numeric vector with the column numbers of the items we expect to load onto that factor.

If we look back at the dataset and the factors, we see that the correspondence between factors and item column numbers is quite straightforward here. The first three columns hold the items of the first factor, the next three columns the items of the second factor, and so on. This may not be the case for your dataset! Many scales interleave items from different factors, so non-sequential columns may refer to the same factor. It is also important to remember that the item numbers in the rotation matrix always refer to the column positions of the items in a data.frame containing only the item data (recall that we started this vignette by making a separate dataset with only the item data). The lowest number will always be one and the highest will be the total number of items.
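As a purely hypothetical illustration of such an interleaved layout, the mapping list for a six-item scale whose odd columns belong to one factor and even columns to another would look like this (factor names A and B are invented for the example):

```r
# hypothetical interleaved scale: odd columns load on factor A, even ones on B
interleaved_loadings_list <- list(A = c(1, 3, 5),
                                  B = c(2, 4, 6))
```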

# list with mapping between factors and items
main_loadings_list <- list(visual = c(1:3),
                           textual = c(4:6),
                           speed = c(7:9))
target_rot <- make_target(nitems = 9, mainloadings = main_loadings_list)
target_rot
#>       visual textual speed
#>  [1,]     NA       0     0
#>  [2,]     NA       0     0
#>  [3,]     NA       0     0
#>  [4,]      0      NA     0
#>  [5,]      0      NA     0
#>  [6,]      0      NA     0
#>  [7,]      0       0    NA
#>  [8,]      0       0    NA
#>  [9,]      0       0    NA

NA values in the target rotation matrix mark loadings that will not be pulled toward zero by the rotation procedure, while zeros mark loadings that will be.
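To make those roles concrete, the same matrix could be assembled by hand in base R (a sketch of what make_target() automates):

```r
# start from an all-zero matrix and free the expected main loadings with NA
target_by_hand <- matrix(0, nrow = 9, ncol = 3,
                         dimnames = list(NULL, c("visual", "textual", "speed")))
target_by_hand[1:3, "visual"]  <- NA
target_by_hand[4:6, "textual"] <- NA
target_by_hand[7:9, "speed"]   <- NA
```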

One can also easily make a target rotation for a bifactor model.

bifactor_target_rot <- make_target(nitems = 9,
                                   mainloadings = main_loadings_list,
                                   bifactor = TRUE)
bifactor_target_rot
#>       visual textual speed  G
#>  [1,]     NA       0     0 NA
#>  [2,]     NA       0     0 NA
#>  [3,]     NA       0     0 NA
#>  [4,]      0      NA     0 NA
#>  [5,]      0      NA     0 NA
#>  [6,]      0      NA     0 NA
#>  [7,]      0       0    NA NA
#>  [8,]      0       0    NA NA
#>  [9,]      0       0    NA NA

However, we will keep the non-bifactor model in this vignette.

Now for the loading extraction itself, done with the esem_efa() function. We need to supply the data, the number of factors to extract and the target rotation matrix.

# Specify the EFA block.
# Note: had we continued with the bifactor model, 'nfactors' would be 4,
# not 3, because of the added G factor.

efa_block <- esem_efa(data = hw_data,
                      nfactors = 3,
                      target = target_rot)
#> Loading required namespace: GPArotation
efa_block
#> Factor Analysis using method =  pa
#> Call: psych::fa(r = data, nfactors = nfactors, rotate = targetAlgorithm, 
#>     fm = fm, Target = target)
#> Standardized loadings (pattern matrix) based upon correlation matrix
#>      PA2   PA3   PA1   h2   u2 com
#> x1  0.18  0.08  0.58 0.48 0.52 1.2
#> x2  0.03 -0.09  0.52 0.26 0.74 1.1
#> x3 -0.08  0.08  0.67 0.45 0.55 1.1
#> x4  0.85  0.01  0.02 0.73 0.27 1.0
#> x5  0.89  0.00 -0.06 0.75 0.25 1.0
#> x6  0.80 -0.01  0.08 0.69 0.31 1.0
#> x7  0.04  0.75 -0.24 0.51 0.49 1.2
#> x8 -0.05  0.72  0.03 0.52 0.48 1.0
#> x9  0.01  0.50  0.32 0.46 0.54 1.7
#> 
#>                        PA2  PA3  PA1
#> SS loadings           2.22 1.38 1.26
#> Proportion Var        0.25 0.15 0.14
#> Cumulative Var        0.25 0.40 0.54
#> Proportion Explained  0.46 0.28 0.26
#> Cumulative Proportion 0.46 0.74 1.00
#> 
#>  With factor correlations of 
#>      PA2  PA3  PA1
#> PA2 1.00 0.26 0.34
#> PA3 0.26 1.00 0.31
#> PA1 0.34 0.31 1.00
#> 
#> Mean item complexity =  1.1
#> Test of the hypothesis that 3 factors are sufficient.
#> 
#> The degrees of freedom for the null model are  36  and the objective function was  3.05 with Chi Square of  904.1
#> The degrees of freedom for the model are 12  and the objective function was  0.08 
#> 
#> The root mean square of the residuals (RMSR) is  0.02 
#> The df corrected root mean square of the residuals is  0.03 
#> 
#> The harmonic number of observations is  301 with the empirical chi square  7.87  with prob <  0.8 
#> The total number of observations was  301  with Likelihood Chi Square =  22.54  with prob <  0.032 
#> 
#> Tucker Lewis Index of factoring reliability =  0.963
#> RMSEA index =  0.054  and the 90 % confidence intervals are  0.016 0.088
#> BIC =  -45.95
#> Fit based upon off diagonal values = 1
#> Measures of factor score adequacy             
#>                                                    PA2  PA3  PA1
#> Correlation of (regression) scores with factors   0.94 0.86 0.84
#> Multiple R square of scores with factors          0.89 0.74 0.70
#> Minimum correlation of possible factor scores     0.78 0.48 0.40

The esem_efa() function is actually a wrapper around fa(), the exploratory factor analysis function from the psych package. All of fa()’s controls can be supplied for finer specification of the factor extraction procedure; just be sure to always pass the arguments from the original function as name = value pairs. Check fa()’s documentation for information on those controls and on all the fields inside the output object.
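For instance, to switch the extraction method from the default principal axis factoring to maximum likelihood, one could pass fa()’s fm argument through (a sketch, left commented like the alternatives below; the object name efa_block_ml is invented for the example):

```r
# maximum likelihood extraction instead of the default principal axis ("pa")
# efa_block_ml <- esem_efa(data = hw_data,
#                          nfactors = 3,
#                          target = target_rot,
#                          fm = "ml")
```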

By default, the target rotation used is oblique. The user can switch to an orthogonal target rotation by setting the targetAlgorithm parameter to "TargetT". Another alternative is to drop target rotation altogether and use a Geomin rotation instead; in that case, simply omit the target parameter. Finally, for bifactor models, set bifactor = TRUE (currently, bifactor models are only available with target rotation). The code for each of these cases is commented below.

# orthogonal target rotation
# esem_efa(data = hw_data,
#          nfactors = 3,
#          target = target_rot,
#          targetAlgorithm = "TargetT")

# geomin rotation
# esem_efa(data = hw_data,
#          nfactors = 3)

# bifactor model
# esem_efa(data = hw_data,
#          nfactors = 4,
#          target = bifactor_target_rot,
#          bifactor = TRUE)

Being able to run these factor analyses as ESEM blocks is nice, but it still lacks the great flexibility and extensibility of CFA/SEM models. To access those features, we will use the ESEM-within-CFA approach next.

ESEM-within-CFA

Once we have an EFA fitted in the ESEM approach, we just need to use the syntax_composer() function to “compose” the ESEM-within-CFA model in lavaan’s syntax. Then, with the syntax, we run the lavaan fit.

syntax_composer() takes as its first argument an EFA solution, and as its second argument a named list indicating the referent for each factor. Each entry in the list should have the form factor = "item_name". Importantly, this list must follow the order in which the factors appear in the factor loadings matrix of the EFA solution. Usually, this will not be the same order you used in the list that created the target rotation, because in the EFA matrix the factors are ordered by the amount of variance explained, not by any order we supply.

For instance, by checking the loadings we can infer that the order in the factor loadings matrix of our example is “textual, speed, visual”. That is not the order we used in make_target() (“visual, textual, speed”).

efa_block$loadings
#> 
#> Loadings:
#>    PA2    PA3    PA1   
#> x1  0.179         0.577
#> x2                0.515
#> x3                0.670
#> x4  0.845              
#> x5  0.886              
#> x6  0.803              
#> x7         0.745 -0.240
#> x8         0.724       
#> x9         0.504  0.317
#> 
#>                  PA2   PA3   PA1
#> SS loadings    2.189 1.354 1.218
#> Proportion Var 0.243 0.150 0.135
#> Cumulative Var 0.243 0.394 0.529

When checking the loadings matrix we can also pick the best referent for each factor. It should always be an item that loads heavily on its factor and poorly on the others. So, for factor “textual” the referent will be x5, for “speed” it will be x8, and for “visual”, x3. We will create the list with them in this order.

# create named list of referents
hw_referents <- list(textual = "x5",
                     speed = "x8",
                     visual = "x3")

Alternatively, it is possible to use the find_referents() function for automated selection of referents. The inputs are the result from the esem_efa() function and a character vector with the desired names for the factors. Once again, the names must refer to the factors in the order they appear in the exploratory solution.

find_referents(efa_block, c("textual", "speed", "visual"))
#> $textual
#> [1] "x5"
#> 
#> $speed
#> [1] "x7"
#> 
#> $visual
#> [1] "x3"

Note that the referents chosen by the function are not exactly the same as the ones we chose by inspecting the factor loadings: the referent for “speed” differs. This happens because the current implementation of find_referents() simply picks the highest-loading item on each factor, with no regard for how well that item loads on the other factors.

Finally, we compose the lavaan syntax with syntax_composer():

# compose lavaan syntax
model_syntax <- syntax_composer(efa_object = efa_block,
                                referents = hw_referents)

# alternatively, if you plan to fit the model with free factor variance parameters
# model_syntax_free_var <- syntax_composer(efa_object = efa_block,
#                                          referents = hw_referents,
#                                          only_fix_crossloadings = FALSE)

lavaan’s model syntax is nothing more than an (often) long string. The best way to inspect the resulting syntax is with writeLines():

writeLines(model_syntax)
#> textual =~ start(0.179)*x1+
#> start(0.03)*x2+
#> -0.082*x3+
#> start(0.845)*x4+
#> start(0.886)*x5+
#> start(0.803)*x6+
#> start(0.037)*x7+
#> -0.049*x8+
#> start(0.014)*x9 
#> 
#> speed =~ start(0.081)*x1+
#> start(-0.085)*x2+
#> 0.075*x3+
#> start(0.01)*x4+
#> 0.003*x5+
#> start(-0.006)*x6+
#> start(0.745)*x7+
#> start(0.724)*x8+
#> start(0.504)*x9 
#> 
#> visual =~ start(0.577)*x1+
#> start(0.515)*x2+
#> start(0.67)*x3+
#> start(0.016)*x4+
#> -0.064*x5+
#> start(0.08)*x6+
#> start(-0.24)*x7+
#> 0.035*x8+
#> start(0.317)*x9

We can confirm that each factor has two fixed parameters (the loadings of the other factors’ referents) and that all other parameters carry the EFA loadings as starting values. We are ready to run a CFA in lavaan with this syntax. Alternatively, you can simply copy the syntax and run the model in lavaan-powered point-and-click software such as JASP or jamovi (with the SEM module). Just be sure to check that you are using the parametrization that corresponds to the syntax you composed.

cfa_fit <- lavaan::cfa(model = model_syntax, data = hw_data, std.lv = TRUE)
lavaan::summary(cfa_fit, fit.measures = TRUE, std = TRUE)
#> lavaan 0.6-10 ended normally after 28 iterations
#> 
#>   Estimator                                         ML
#>   Optimization method                           NLMINB
#>   Number of model parameters                        33
#>                                                       
#>   Number of observations                           301
#>                                                       
#> Model Test User Model:
#>                                                       
#>   Test statistic                                22.897
#>   Degrees of freedom                                12
#>   P-value (Chi-square)                           0.029
#> 
#> Model Test Baseline Model:
#> 
#>   Test statistic                               918.852
#>   Degrees of freedom                                36
#>   P-value                                        0.000
#> 
#> User Model versus Baseline Model:
#> 
#>   Comparative Fit Index (CFI)                    0.988
#>   Tucker-Lewis Index (TLI)                       0.963
#> 
#> Loglikelihood and Information Criteria:
#> 
#>   Loglikelihood user model (H0)              -3706.541
#>   Loglikelihood unrestricted model (H1)      -3695.092
#>                                                       
#>   Akaike (AIC)                                7479.081
#>   Bayesian (BIC)                              7601.416
#>   Sample-size adjusted Bayesian (BIC)         7496.758
#> 
#> Root Mean Square Error of Approximation:
#> 
#>   RMSEA                                          0.055
#>   90 Percent confidence interval - lower         0.017
#>   90 Percent confidence interval - upper         0.089
#>   P-value RMSEA <= 0.05                          0.365
#> 
#> Standardized Root Mean Square Residual:
#> 
#>   SRMR                                           0.017
#> 
#> Parameter Estimates:
#> 
#>   Standard errors                             Standard
#>   Information                                 Expected
#>   Information saturated (h1) model          Structured
#> 
#> Latent Variables:
#>                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
#>   textual =~                                                            
#>     x1                0.218    0.080    2.728    0.006    0.218    0.187
#>     x2                0.052    0.084    0.611    0.541    0.052    0.044
#>     x3               -0.082                              -0.082   -0.073
#>     x4                0.971    0.062   15.657    0.000    0.971    0.836
#>     x5                1.138    0.063   18.018    0.000    1.138    0.883
#>     x6                0.878    0.058   15.034    0.000    0.878    0.803
#>     x7                0.030    0.078    0.391    0.696    0.030    0.028
#>     x8               -0.049                              -0.049   -0.048
#>     x9                0.023    0.064    0.359    0.720    0.023    0.023
#>   speed =~                                                              
#>     x1                0.080    0.087    0.917    0.359    0.080    0.069
#>     x2               -0.105    0.093   -1.128    0.259   -0.105   -0.089
#>     x3                0.075                               0.075    0.066
#>     x4                0.006    0.062    0.104    0.917    0.006    0.006
#>     x5                0.003                               0.003    0.002
#>     x6               -0.008    0.060   -0.139    0.889   -0.008   -0.008
#>     x7                0.800    0.098    8.140    0.000    0.800    0.736
#>     x8                0.737    0.069   10.647    0.000    0.737    0.729
#>     x9                0.503    0.068    7.394    0.000    0.503    0.500
#>   visual =~                                                             
#>     x1                0.689    0.088    7.834    0.000    0.689    0.591
#>     x2                0.597    0.093    6.413    0.000    0.597    0.508
#>     x3                0.759    0.077    9.829    0.000    0.759    0.672
#>     x4                0.043    0.067    0.646    0.518    0.043    0.037
#>     x5               -0.064                              -0.064   -0.050
#>     x6                0.101    0.063    1.605    0.109    0.101    0.093
#>     x7               -0.236    0.100   -2.355    0.019   -0.236   -0.217
#>     x8                0.035                               0.035    0.035
#>     x9                0.318    0.072    4.384    0.000    0.318    0.315
#> 
#> Covariances:
#>                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
#>   textual ~~                                                            
#>     speed             0.258    0.087    2.977    0.003    0.258    0.258
#>     visual            0.303    0.092    3.307    0.001    0.303    0.303
#>   speed ~~                                                              
#>     visual            0.307    0.115    2.670    0.008    0.307    0.307
#> 
#> Variances:
#>                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
#>    .x1                0.696    0.087    8.038    0.000    0.696    0.513
#>    .x2                1.035    0.102   10.151    0.000    1.035    0.749
#>    .x3                0.692    0.097    7.134    0.000    0.692    0.543
#>    .x4                0.377    0.048    7.902    0.000    0.377    0.279
#>    .x5                0.403    0.061    6.590    0.000    0.403    0.243
#>    .x6                0.365    0.042    8.613    0.000    0.365    0.305
#>    .x7                0.594    0.106    5.624    0.000    0.594    0.502
#>    .x8                0.479    0.080    5.958    0.000    0.479    0.469
#>    .x9                0.551    0.060    9.132    0.000    0.551    0.543
#>     textual           1.000                               1.000    1.000
#>     speed             1.000                               1.000    1.000
#>     visual            1.000                               1.000    1.000

If you need to fit a model with free factor (residual) variances, use the fit_free_factor_var_esem() function. It is a wrapper around the lavaan() function with the same parameters as those set by cfa(), except that the factor variances are freely estimated and the first indicator of each factor is not automatically fixed. We assume identification is provided by the fixed referents in the model syntax, which will be the case if you set only_fix_crossloadings = FALSE when composing the syntax with syntax_composer().

# cfa_fit <- fit_free_factor_var_esem(model_syntax_free_var, hw_data)
# lavaan::summary(cfa_fit, fit.measures = TRUE, std = TRUE)

McDonald’s Omega

We can calculate McDonald’s omegas using the fitted model and target rotation matrix. Currently, the use of a target rotation matrix is mandatory.

omega_esem(cfa_fit, target_rot)
#>  visual textual   speed 
#>   0.635   0.885   0.718

Extending the model

To modify or extend the model, we just need to add more information to the model syntax we already have. Say we want to estimate the residual covariance between two items; we could do:

# lavaan syntax for "x3 covaries with x4"
mod_extension <- "x3 ~~ x4"
extended_model_syntax <- paste(model_syntax, mod_extension, sep = "\n")
writeLines(extended_model_syntax)
#> textual =~ start(0.179)*x1+
#> start(0.03)*x2+
#> -0.082*x3+
#> start(0.845)*x4+
#> start(0.886)*x5+
#> start(0.803)*x6+
#> start(0.037)*x7+
#> -0.049*x8+
#> start(0.014)*x9 
#> 
#> speed =~ start(0.081)*x1+
#> start(-0.085)*x2+
#> 0.075*x3+
#> start(0.01)*x4+
#> 0.003*x5+
#> start(-0.006)*x6+
#> start(0.745)*x7+
#> start(0.724)*x8+
#> start(0.504)*x9 
#> 
#> visual =~ start(0.577)*x1+
#> start(0.515)*x2+
#> start(0.67)*x3+
#> start(0.016)*x4+
#> -0.064*x5+
#> start(0.08)*x6+
#> start(-0.24)*x7+
#> 0.035*x8+
#> start(0.317)*x9 
#> 
#> x3 ~~ x4

The call paste(model_syntax, mod_extension, sep = "\n") can be used iteratively to add extensions to the model step by step, or one can write several extensions between quotes and add them all in one go.
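A sketch of the one-go variant (the second covariance, x7 ~~ x8, is purely illustrative):

```r
# several extensions written in one string
mod_extensions <- "
x3 ~~ x4
x7 ~~ x8
"
# then add them to the model syntax in a single paste() call:
# extended_model_syntax <- paste(model_syntax, mod_extensions, sep = "\n")
```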

After extending the model, the new syntax can be used to fit a new model with lavaan’s cfa() or sem() (if your extensions include regressions).

extended_cfa_fit <- lavaan::cfa(extended_model_syntax, hw_data, std.lv = TRUE)
lavaan::summary(extended_cfa_fit, fit.measures = TRUE, std = TRUE)
#> lavaan 0.6-10 ended normally after 28 iterations
#> 
#>   Estimator                                         ML
#>   Optimization method                           NLMINB
#>   Number of model parameters                        34
#>                                                       
#>   Number of observations                           301
#>                                                       
#> Model Test User Model:
#>                                                       
#>   Test statistic                                22.535
#>   Degrees of freedom                                11
#>   P-value (Chi-square)                           0.021
#> 
#> Model Test Baseline Model:
#> 
#>   Test statistic                               918.852
#>   Degrees of freedom                                36
#>   P-value                                        0.000
#> 
#> User Model versus Baseline Model:
#> 
#>   Comparative Fit Index (CFI)                    0.987
#>   Tucker-Lewis Index (TLI)                       0.957
#> 
#> Loglikelihood and Information Criteria:
#> 
#>   Loglikelihood user model (H0)              -3706.360
#>   Loglikelihood unrestricted model (H1)      -3695.092
#>                                                       
#>   Akaike (AIC)                                7480.719
#>   Bayesian (BIC)                              7606.761
#>   Sample-size adjusted Bayesian (BIC)         7498.933
#> 
#> Root Mean Square Error of Approximation:
#> 
#>   RMSEA                                          0.059
#>   90 Percent confidence interval - lower         0.022
#>   90 Percent confidence interval - upper         0.094
#>   P-value RMSEA <= 0.05                          0.297
#> 
#> Standardized Root Mean Square Residual:
#> 
#>   SRMR                                           0.017
#> 
#> Parameter Estimates:
#> 
#>   Standard errors                             Standard
#>   Information                                 Expected
#>   Information saturated (h1) model          Structured
#> 
#> Latent Variables:
#>                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
#>   textual =~                                                            
#>     x1                0.229    0.081    2.838    0.005    0.229    0.196
#>     x2                0.059    0.085    0.692    0.489    0.059    0.050
#>     x3               -0.082                              -0.082   -0.073
#>     x4                0.978    0.063   15.556    0.000    0.978    0.841
#>     x5                1.136    0.063   17.951    0.000    1.136    0.882
#>     x6                0.879    0.058   15.098    0.000    0.879    0.804
#>     x7                0.027    0.079    0.346    0.729    0.027    0.025
#>     x8               -0.049                              -0.049   -0.048
#>     x9                0.027    0.064    0.416    0.677    0.027    0.026
#>   speed =~                                                              
#>     x1                0.079    0.087    0.909    0.363    0.079    0.068
#>     x2               -0.108    0.093   -1.164    0.244   -0.108   -0.092
#>     x3                0.075                               0.075    0.066
#>     x4                0.012    0.062    0.192    0.848    0.012    0.010
#>     x5                0.003                               0.003    0.002
#>     x6               -0.008    0.060   -0.132    0.895   -0.008   -0.007
#>     x7                0.809    0.100    8.123    0.000    0.809    0.744
#>     x8                0.734    0.069   10.650    0.000    0.734    0.726
#>     x9                0.503    0.068    7.388    0.000    0.503    0.499
#>   visual =~                                                             
#>     x1                0.680    0.088    7.768    0.000    0.680    0.584
#>     x2                0.598    0.093    6.429    0.000    0.598    0.509
#>     x3                0.759    0.077    9.819    0.000    0.759    0.672
#>     x4                0.022    0.074    0.302    0.763    0.022    0.019
#>     x5               -0.064                              -0.064   -0.050
#>     x6                0.100    0.063    1.584    0.113    0.100    0.091
#>     x7               -0.247    0.101   -2.438    0.015   -0.247   -0.227
#>     x8                0.035                               0.035    0.035
#>     x9                0.316    0.072    4.360    0.000    0.316    0.314
#> 
#> Covariances:
#>                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
#>  .x3 ~~                                                                 
#>    .x4                0.028    0.046    0.611    0.541    0.028    0.056
#>   textual ~~                                                            
#>     speed             0.260    0.087    2.980    0.003    0.260    0.260
#>     visual            0.297    0.092    3.214    0.001    0.297    0.297
#>   speed ~~                                                              
#>     visual            0.313    0.115    2.722    0.006    0.313    0.313
#> 
#> Variances:
#>                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
#>    .x1                0.701    0.086    8.112    0.000    0.701    0.516
#>    .x2                1.031    0.102   10.127    0.000    1.031    0.746
#>    .x3                0.691    0.097    7.105    0.000    0.691    0.542
#>    .x4                0.375    0.048    7.778    0.000    0.375    0.277
#>    .x5                0.407    0.061    6.634    0.000    0.407    0.245
#>    .x6                0.366    0.042    8.614    0.000    0.366    0.306
#>    .x7                0.585    0.107    5.474    0.000    0.585    0.494
#>    .x8                0.483    0.080    6.071    0.000    0.483    0.473
#>    .x9                0.550    0.060    9.126    0.000    0.550    0.542
#>     textual           1.000                               1.000    1.000
#>     speed             1.000                               1.000    1.000
#>     visual            1.000                               1.000    1.000

Check the documentation of lavaan’s lavaanify() to learn about the syntax for model extensions (covariances, regressions and much more).
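For example, a structural extension regressing one factor on the others could be appended in the same way and fitted with sem() (a purely hypothetical sketch, left commented; the object names are invented):

```r
# hypothetical structural extension: regress speed on the other two factors
# struct_extension <- "speed ~ visual + textual"
# sem_syntax <- paste(model_syntax, struct_extension, sep = "\n")
# sem_fit <- lavaan::sem(sem_syntax, hw_data, std.lv = TRUE)
```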

Export results

The information from summary() can be exported to a text file with the export_lavaan_results() function. It is possible to add a preamble to enrich the information in the output.

Since the function runs lavaan’s summary() before dumping the results, one can add extra results, such as modification indices and r-squared measures, by including the corresponding modifiers in the call (all possible modifiers are listed in lavaan’s lavaan-class help page).

my_preamble <- '
ESEM on the classic Holzinger and Swineford data.

Referents chosen after a target-rotated EFA with
Principal Axis extraction.
'

export_lavaan_results(extended_cfa_fit, preamble = my_preamble, rsquare = TRUE)

This saves a “lavaan_summary.txt” file to the current working directory, with the preamble from my_preamble at the top. We also added the rsquare modifier to make sure it shows up in the results. To save the file somewhere else, pass the full path as path_name, something like “C:/directory_one/another_directory/my_desired_file_name.txt”. Irrespective of operating system, the path must be written with forward slashes (/), not backslashes, and must end with a file name carrying a text extension.
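A sketch of such a call, left commented (the path and file name are purely illustrative):

```r
# save the summary to a specific location (hypothetical path)
# export_lavaan_results(extended_cfa_fit,
#                       preamble = my_preamble,
#                       rsquare = TRUE,
#                       path_name = "C:/my_results/esem_summary.txt")
```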

References

Asparouhov, Tihomir, and Bengt Muthén. 2009. “Exploratory Structural Equation Modeling.” Structural Equation Modeling: A Multidisciplinary Journal 16 (3): 397–438. https://doi.org/10.1080/10705510903008204.

Marsh, Herbert W., Alexandre J. S. Morin, Philip D. Parker, and Gurvinder Kaur. 2014. “Exploratory Structural Equation Modeling: An Integration of the Best Features of Exploratory and Confirmatory Factor Analysis.” Annual Review of Clinical Psychology 10 (1): 85–110. https://doi.org/10.1146/annurev-clinpsy-032813-153700.