Appendix A
Online supplement to the paper: Smith, D., Harvey, P., Lawn, S., Harris, M., & Battersby, M. (2016). Measuring chronic condition self-management in an Australian community: Factor structure of the revised Partners in Health (PIH) scale. Quality of Life Research.
Mplus code for the PIH 4-factor two-level Bayesian structural equation model with cross-loadings N (0, 0.03) presented in Table 4.
------
TITLE: 4-factor two-level BCFA with cross-loadings and covariate (“personwt”)
DATA: FILE IS pih1.dat;
VARIABLE: NAMES = q2d q3d q4d q5d q6d q7d q8d q9d q10d q11d q12d q13d personwt interviewer;
MISSING ARE ALL (-999);
USEVARIABLES = q2d-q13d personwt;
CLUSTER = interviewer;
WITHIN = personwt;
DEFINE: STANDARDIZE q2d-q13d;
ANALYSIS: TYPE = TWOLEVEL;
ESTIMATOR = BAYES;
CHAINS = 2;
PROCESSORS = 2;
BITERATIONS = 100000 (15000);
MODEL: %WITHIN%
!Participant level (w)
fkw BY q2d* (L1)
q3d (L2)
q4d q5d q6d q7d q8d q9d q10d q11d q12d q13d (xload1-xload10);
fpw BY q4d* (L3)
q5d (L4)
q6d (L5)
q7d (L6)
q2d q3d q8d q9d q10d q11d q12d q13d (xload11-xload18);
frw BY q8d* (L7)
q9d (L8)
q2d q3d q4d q5d q6d q7d q10d q11d q12d q13d (xload19-xload28);
fcw BY q10d* (L9)
q11d (L10)
q12d (L11)
q13d (L12)
q2d q3d q4d q5d q6d q7d q8d q9d (xload29-xload36);
fkw-fcw@1;
q2d-q13d (R1-R12);
fkw-fcw ON personwt;
%BETWEEN%
!Interviewer level (b)
fkb BY q2d q3d;
fpb BY q4d q5d q6d q7d;
frb BY q8d q9d;
fmb BY q10d q11d q12d q13d;
MODEL PRIORS: xload1-xload36 ~ N(0, 0.01);
OUTPUT: STAND(STDYX);
PLOT: TYPE = PLOT2;
MODEL CONSTRAINT: !calculation of omega values
NEW(NUM1-NUM4 DENOM1-DENOM4 OMEGA1-OMEGA4);
NUM1 = (L1+L2)**2;
DENOM1 = ((L1+L2)**2)+(R1+R2);
OMEGA1 = NUM1/DENOM1;
NUM2 = (L3+L4+L5+L6)**2;
DENOM2 = ((L3+L4+L5+L6)**2)+(R3+R4+R5+R6);
OMEGA2 = NUM2/DENOM2;
NUM3 = (L7+L8)**2;
DENOM3 = ((L7+L8)**2)+(R7+R8);
OMEGA3 = NUM3/DENOM3;
NUM4 = (L9+L10+L11+L12)**2;
DENOM4 = ((L9+L10+L11+L12)**2)+(R9+R10+R11+R12);
OMEGA4 = NUM4/DENOM4;
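For readers working outside Mplus, the omega calculation coded in the MODEL CONSTRAINT block can be sketched in Python; the loadings and residual variances below are hypothetical values for illustration, not estimates from the paper:

```python
# Composite reliability (omega), as coded in the MODEL CONSTRAINT block:
# omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of residual variances)

def omega(loadings, residuals):
    num = sum(loadings) ** 2
    return num / (num + sum(residuals))

# Hypothetical two-item factor (loadings and residuals are made up)
omega_knowledge = omega([0.8, 0.7], [0.36, 0.51])
```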
------
Appendix B
2.4 Statistical analysis
Because individual participants from the same interviewer may have responded in a similar way, design effects (deff) were calculated for a two-level design to assess for any distortion in standard errors: deff = 1 + (cluster size − 1) × ICC, where ICC denotes the intraclass correlation coefficient [1-3]. The overall number of participants per interviewer (cluster size) was calculated for an unbalanced design as cluster size = (N² − Σnⱼ²)/(N(k − 1)), where nⱼ is the size of the jth cluster, k is the total number of clusters, and N is the total sample size [4]. For PIH items across 39 interviewers (cluster size = 22.75), 5 of 12 items had deff values of approximately 2 or greater, indicating that clustering may be substantial enough to bias standard errors [1]. Therefore, a model-based multilevel approach was used to analyse the survey data and produce unbiased estimates [5]. This method decomposed the total factor variance into a between-interviewer variance component and a within-interviewer variance component [1]. The within-interviewer component allowed marginal or population-averaged inference, akin to the fixed part of a mixed-effects model [6,1,7]. The primary theoretical interest was to examine the PIH 4-factor population-average model when disentangled from interviewer-specific variation. The person weight variable was specified at the within-interviewer level.
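As a minimal illustration (not part of the original supplement), the two quantities above can be computed as follows; the ICC value in the usage line is hypothetical:

```python
def average_cluster_size(cluster_sizes):
    """Cluster size for an unbalanced design:
    (N^2 - sum of n_j^2) / (N * (k - 1))."""
    N = sum(cluster_sizes)
    k = len(cluster_sizes)
    return (N ** 2 - sum(n ** 2 for n in cluster_sizes)) / (N * (k - 1))

def deff(cluster_size, icc):
    """Design effect: deff = 1 + (cluster size - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

# Hypothetical ICC of 0.05 combined with the paper's cluster size of 22.75
d = deff(22.75, 0.05)  # just over 2, i.e. clustering roughly doubles variances
```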
A Bayesian confirmatory factor analysis (BCFA) approach was used to test the convergent and discriminant validity of the revised PIH. The BCFA approach is a compromise between maximum likelihood (ML) CFA and EFA [8]. In a single-step analysis, BCFA enables the specification of the hypothesized major factor patterns as well as informative small-variance priors for cross-loadings [8]. This minimizes the capitalization on chance that may otherwise occur through a sequence of model modifications when a ML confirmatory approach is used for exploratory purposes. Statistical inferences about parameter estimates and model fit are made from the posterior distribution, which combines prior information with the likelihood of the data. BCFA does not rely on large-sample theory and performs better with small samples than ML algorithms [8]. Also, inferences about model parameters from the posterior distribution do not depend on an assumption of normality, as the posterior approaches multivariate normality with the increasing influx of data from the same underlying process [8,9]. This accounts for uncertainty in parameter estimates in both the within-level and between-level components of a multilevel BCFA, and adjustments to standard errors and degrees of freedom are not required [10,11].
All analyses were conducted using Mplus software (Version 7.3) (Muthén & Muthén, 1998-2012). Bayesian estimation used two independent Markov chain Monte Carlo (MCMC) chains. The minimum number of iterations was set at 15,000 and the maximum at 100,000, and convergence was monitored using the potential scale reduction (PSR) criterion, where values less than 1.1 provide evidence of convergence [12]. The subjective assessment of model convergence was carried out by examining posterior parameter trace plots. A well-mixing chain is identifiable from a plot that rapidly traverses the posterior distribution with a relatively constant mean and variance. These plots can also be used to identify mode-switching behaviour in the posterior space due to rotational invariance [13]. This may result from PIH variables being allowed to load onto more than one factor in BCFA [13,14]. Rotational invariance is similar to label switching in mixture models and can lead to poor estimates of the posterior distribution [13,15,14].
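The PSR criterion mentioned above can be sketched as follows. This is a simplified illustration of the Gelman-Rubin diagnostic for a single parameter across equal-length chains, not the exact computation Mplus performs:

```python
import statistics

def psr(chains):
    """Potential scale reduction for one parameter, given two or more
    equal-length chains of posterior draws; values near 1 suggest the
    chains have converged to the same distribution."""
    m = len(chains)
    n = len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    grand = statistics.fmean(means)
    # Between-chain variance (scaled by chain length)
    B = n * sum((mu - grand) ** 2 for mu in means) / (m - 1)
    # Average within-chain variance
    W = statistics.fmean(statistics.variance(c) for c in chains)
    # Weighted estimate of the marginal posterior variance
    var_plus = (n - 1) / n * W + B / n
    return (var_plus / W) ** 0.5
```

With identical chains the between-chain variance is zero, so the PSR falls slightly below 1; divergent chains push it above the 1.1 cut-off.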
BCFA was first used to estimate an ‘exact’ PIH factor model in which non-informative priors were used for the hypothesised major loadings and non-target parameters were constrained to zero at the within and between levels. This model was expected to produce estimates close to those of ML-CFA, as the likelihood contributes most of the information [8]. We then repeated the BCFA analysis with the addition of strongly informative priors for cross-loadings at the within level to ‘approximate’ an exact model. This enabled the isolation of parts of the model that may have contributed to any model misfit [16]. The observed variables were standardized in accordance with the standardized priors. A sample of Mplus code for an approximate model is provided in Appendix A. To identify a suitable cross-loading prior standard deviation, a range of increasing values (0.03, 0.10 and 0.14) was tested. This allowed the opportunity to assess the degree of change in posterior inferences when other plausible probability models were used instead of the present model [12]. These values correspond to 95% cross-loading ranges of -0.06 to +0.06, -0.2 to +0.2, and -0.28 to +0.28, respectively. To assess for any further possible improvement in results, cross-loadings whose 95% credible intervals did not cover zero were freely estimated with non-informative priors in an approximate model and compared to an exact model with the same freely estimated cross-loadings. Model fit was evaluated using posterior predictive checking, which is analogous to the chi-squared test used in ML-CFA but less sensitive to negligible model misspecifications [8]. The posterior predictive p-value (PP p) for model fit is based on the median of the distribution of chi-squared values obtained under BCFA and is calculated from the difference in fit statistics between observed and replicated data.
A PP p-value greater than 0.05, alongside a 95% confidence interval for the difference that covers zero, can be interpreted as showing acceptable model fit [16]. Final model selection was based on PP p > 0.05 while remaining close to an exact BCFA model, to better reflect substantive theory.
To determine the better-fitting model among an exact BCFA model and approximate BCFA models with varying informative priors, the deviance information criterion (DIC) was used. The DIC is somewhat analogous to the Akaike information criterion (AIC) but takes into account model complexity via the estimated, or effective, number of parameters pD. This provides an advantage over the AIC and Bayesian information criterion (BIC), where small-variance prior parameters are counted as actual parameters [16]. The estimated number of parameters is obtained from pD = D̄ − D(θ̄), where D̄ is the posterior mean deviance across all MCMC iterations and D(θ̄) is the deviance evaluated at the posterior mean of the parameters [16-18]. Deviance information criterion values are then calculated from DIC = D̄ + pD. Lower DIC values imply higher predictive accuracy and may be used in parallel with posterior predictive checks. Because the DIC is well known to be biased towards more complex models, comparing model fit between a simple 4-factor structure and a bi-factor structure was based on PP p-values and the question: was the improvement in fit substantial enough to justify the additional model complexity [19,9,20,21]?
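A minimal sketch of the pD and DIC formulas above, assuming a list of per-iteration deviance values and the deviance at the posterior mean are available:

```python
import statistics

def dic(deviances, deviance_at_posterior_mean):
    """DIC = Dbar + pD, where pD = Dbar - D(thetabar)."""
    dbar = statistics.fmean(deviances)        # posterior mean deviance (Dbar)
    pd = dbar - deviance_at_posterior_mean    # effective number of parameters
    return dbar + pd, pd
```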
References
1. Muthén, B., & Satorra, A. (1995). Complex sample data in structural equation modeling. Sociological Methodology, 25, 267-316.
2. Maas, C. J., & Hox, J. J. (2005). Sufficient sample sizes for multilevel modeling. Methodology, 1(3), 86-92.
3. Davis, R. E., Couper, M. P., Janz, N. K., Caldwell, C. H., & Resnicow, K. (2010). Interviewer effects in public health surveys. Health Education Research, 25(1), 14-26.
4. Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). New York: The Guilford Press.
5. Wu, J.-Y., & Kwok, O.-m. (2012). Using SEM to analyze complex survey data: A comparison between design-based single-level and model-based multilevel approaches. Structural Equation Modeling: A Multidisciplinary Journal, 19(1), 16-35.
6. Rabe-Hesketh, S., & Skrondal, A. (2012). Multilevel and longitudinal modelling using Stata (Third ed.). College Station, Texas: Stata Press.
7. Gardiner, J. C., Luo, Z., & Roman, L. A. (2009). Fixed effects, random effects and GEE: what are the differences? Statistics in Medicine, 28(2), 221-239.
8. Muthén, B., & Asparouhov, T. (2012). Bayesian structural equation modeling: a more flexible representation of substantive theory. Psychological Methods, 17(3), 313.
9. Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2014). Bayesian data analysis (third ed.). Boca Raton: Chapman and Hall/CRC Press.
10. Baldwin, S. A., & Fellingham, G. W. (2013). Bayesian methods for the analysis of small sample multilevel data with a complex variance structure. Psychological Methods, 18(2), 151.
11. Depaoli, S., & Clifton, J. P. (2015). A Bayesian approach to multilevel structural equation modeling with continuous and dichotomous outcomes. Structural Equation Modeling: A Multidisciplinary Journal, 22(3), 327-351.
12. Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7(4), 457-472.
13. Erosheva, E. A., & Curtis, S. M. (2011). Dealing with rotational invariance in Bayesian confirmatory factor analysis. Citeseer.
14. Stephens, M. (2000). Dealing with label switching in mixture models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 62(4), 795-809.
15. Loken, E. (2005). Identification constraints and inference in factor models. Structural Equation Modeling, 12(2), 232-244.
16. Asparouhov, T., Muthén, B., & Morin, A. J. (2015). Bayesian Structural Equation Modeling with Cross-Loadings and Residual Covariances: Comments on Stromeyer et al. Journal of Management, 41 (6), 1561-1577.
17. Spiegelhalter, D. J., Best, N. G., Carlin, B. P., & Van Der Linde, A. (2002). Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64(4), 583-639.
18. Gelman, A., Hwang, J., & Vehtari, A. (2014). Understanding predictive information criteria for Bayesian models. Statistics and Computing, 24(6), 997-1016.
19. Robert, C. P., & Titterington, D. M. (2002). Discussion on 'Bayesian measures of model complexity and fit'. Journal of the Royal Statistical Society Series B (Methodological), B64, 621-622.
20. Ando, T. (2011). Predictive Bayesian Model Selection. American Journal of Mathematical and Management Sciences, 31(1-2), 13-38, doi:10.1080/01966324.2011.10737798.
21. Plummer, M. (2008). Penalized loss functions for Bayesian model comparison. Biostatistics, 9(3), 523-539.