Learning Community Survey (LCS) Validity and Reliability Study

Donna Braun, Ed.D.

Center for Leadership and Educational Equity

Robert K. Gable, Ed.D.

Center for Research and Evaluation, College of Arts & Sciences

Johnson & Wales University

Felice D. Billups, Ed.D.

Center for Research and Evaluation, College of Arts & Sciences

Johnson & Wales University

Citation: Braun, D., Gable, R., & Billups, F. (2015). Learning Community Survey Validity and Reliability Study. Center for Leadership and Educational Equity.

Introduction

This document describes the Learning Community Survey (LCS). It also presents evidence supporting the content and construct validity of data interpretations, along with the internal consistency (alpha) reliabilities of the data for the items defining each dimension assessed.

Survey Description

The Learning Community Survey contained 35 items to obtain educators’ opinions in six domains of leadership practice: Reorganizing Systems (5 items), Setting Direction (6 items), Monitoring Progress (5 items), Building Capacity to Teach (5 items), Building Capacity to Collaborate (7 items), and Building Capacity to Distribute Leadership (7 items). The items were rated on a 4-point Likert agreement scale ranging from 1 = Strongly Disagree to 4 = Strongly Agree.
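The dimensional blueprint can be recorded compactly; the short sketch below simply lists the six dimensions and their item counts, taken directly from the description above, and confirms that they account for all 35 items.

# Blueprint of the LCS: dimension names and item counts as described above.
LCS_DIMENSIONS = {
    "Reorganizing Systems": 5,
    "Setting Direction": 6,
    "Monitoring Progress": 5,
    "Building Capacity to Teach": 5,
    "Building Capacity to Collaborate": 7,
    "Building Capacity to Distribute Leadership": 7,
}

assert sum(LCS_DIMENSIONS.values()) == 35  # matches the 35-item total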

Sample

The survey respondents were N=154 teachers from seven schools.

Validity

Content Validity

Validity evidence based on the survey content was established through the qualitative phase of the study (interviews and focus groups across five schools) and was supported by the related research literature.

Construct Validity

Support for the construct validity of interpretations of the data from the six LCS dimensions was examined using Mplus Version 7.3 to run a confirmatory factor analysis, or CFA (McCoach, Gable, & Madura, 2013). This analysis allowed the researchers to examine whether the pattern of relationships among the items could be explained by the hypothesized six-factor model underlying the LCS. In other words, do the data relationships fit the LCS factor structure specified from the literature?
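To make the hypothesized measurement model concrete, the sketch below shows how two of the six factors could be specified and fit with the open-source semopy package in Python, used here only as a stand-in for Mplus. The data file name, the DataFrame, and all item codes other than SD4, CDL1, and CDL4 are assumptions for illustration.

import pandas as pd
import semopy

# Hypothetical file name; the study's item-level data are not distributed with this report.
responses = pd.read_csv("lcs_item_responses.csv")

# Two of the six factors, written in the lavaan-style syntax that semopy accepts;
# the remaining four factors would be specified the same way. Item codes are
# assumed to follow the SD*/CDL* pattern reported in Table 1.
model_desc = """
SettingDirection =~ SD1 + SD2 + SD3 + SD4 + SD5 + SD6
BuildingCapacityToDistributeLeadership =~ CDL1 + CDL2 + CDL3 + CDL4 + CDL5 + CDL6 + CDL7
"""

model = semopy.Model(model_desc)
model.fit(responses)                 # estimate the measurement model from the item ratings
print(model.inspect())               # parameter estimates, including item loadings
print(semopy.calc_stats(model))      # chi-square, CFI, TLI, RMSEA, and related fit indices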

Table 1 contains the confirmatory factor analysis measurement weights (loadings) for the items defining each dimension. All of the loadings were sufficiently high to support their dimension/item assignments, with the exception of three items: Setting Direction item SD4 (.46), “The students who need the most support are prioritized”; Building Capacity to Distribute Leadership item CDL1 (.44), “I model the attitude and practices I hope to see in the adults in my school”; and item CDL4 (.48), “I plan effective agendas for group conversations.” While these were the weakest items, the overall fit statistics for the specified CFA model supported the construct validity of the LCS data interpretations. The χ² value was 1065.81 (df = 545, p < .001). So that model fit was not judged solely on the chi-square statistic, fit indices less sensitive to sample size were also examined: the root-mean-square error of approximation (RMSEA) was .079, the Tucker-Lewis index (TLI) was .869, and the comparative fit index (CFI) was .880. Although stronger values would be preferable, these fit statistics are adequate to support a reasonable model fit, allowing the dimension-level alpha reliabilities to be computed.
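As a check on the reported fit, the RMSEA can be recovered from the chi-square value, its degrees of freedom, and the sample size (N = 154) using a common formulation of the index (software packages differ slightly in whether N or N − 1 appears in the denominator, but the value rounds to .079 either way):

\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^{2} - df,\; 0)}{df\,(N - 1)}} = \sqrt{\frac{1065.81 - 545}{545 \times 153}} = \sqrt{0.00625} \approx .079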

Alpha Reliabilities

Table 2 contains Cronbach’s alpha internal consistency reliabilities for the data from the items defining each of the six LCS dimensions. The reliabilities ranged from .72 for Monitoring Progress to .82 for Building Capacity to Collaborate. These levels of reliability allowed us to develop dimension-level means to be used in future data analyses.
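For readers who wish to reproduce the dimension-level reliabilities from raw item responses, the short sketch below computes Cronbach’s alpha for a single dimension. The DataFrame and column names are illustrative assumptions, since the item-level data file is not distributed with this report.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents-by-items matrix covering one dimension."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_score_variance)

# Hypothetical usage: columns SD1-SD6 hold the six Setting Direction item ratings.
# responses = pd.read_csv("lcs_item_responses.csv")
# alpha_sd = cronbach_alpha(responses[["SD1", "SD2", "SD3", "SD4", "SD5", "SD6"]])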