Project Great Start Professional Development Initiative

Evidence from 2006-2007

Susan B. Neuman

Intervention

The intervention consisted of a 45-hour, three-credit course in language and literacy held at whichever of four local community colleges was closest to the child care site. For those randomly selected, a year-long coaching intervention occurred in addition to the professional development course. Each intervention is described below.

Language and Literacy Course

In collaboration with our staff, community colleges developed a course in early language and literacy. The course was designed to provide students with content knowledge considered by experts to be essential for quality early language and literacy practice.

Course content was based on a set of core competencies that reflected accreditation standards from the National Association for the Education of Young Children (NAEYC), the International Reading Association (IRA), and state licensing requirements (see Appendix). These core competencies were aligned to measures of quality early childhood practice, including the Early Childhood Environmental Rating Scale (ECERS) (Harms, Clifford, & Cryer, 1998), the Family Day Care Rating Scale (FDCRS) (Harms, Cryer, & Clifford, 2007), and the Early Language and Literacy Classroom Observation (ELLCO) (Smith & Dickinson, 2002). For each core competency, the research base was identified, along with citations and helpful articles for student reading.

A course syllabus aligned with these core competencies was developed at each community college. Although assignments and in-class activities varied by location, all sections focused on developing knowledge of research-based core competencies, including oral language comprehension, phonological awareness, print concepts, strategies for second-language learners, assessment, parent involvement, and content-rich curriculum. These topics were covered over 15 weeks at each of the four community colleges.

Coaching Intervention

We employed a diagnostic/prescriptive model of coaching that focused on helping participants apply research-based strategies to improve child outcomes in language and literacy. Based on a review of the literature (Koh & Neuman, 2006), the model was designed to include the following elements:

  • On-site: Successful coaches meet teachers 'where they are,' in their own practice settings, to help providers learn through modeling and demonstration of practices (Poglinco & Bach, 2004).
  • Balanced and sustained: Coaches involve teachers in ongoing continuing education rather than a temporary infusion or a rapid-fire string of professional development activities (Darling-Hammond, 1997; Guiney, 2001; Speck, 2002).
  • Facilitative of reflection: Effective coaches observe, listen, and support instructional practices that improve child outcomes; they don't dictate 'the right answer' (Guiney, 2001; Harwell-Kee, 1999).
  • Highly interactive: Coaches establish rapport, build trust, and engender mutual respect among practitioners, interacting extensively to benefit children's outcomes (Herll & O'Drobinak, 2004).
  • Corrective feedback: Coaches provide descriptive, not evaluative or judgmental, feedback based on observable events in settings, enabling practitioners to engage in collaborative problem-solving to improve practice (Gallacher, 1997; Schreiber, 1990).
  • Priority-setting: Coaches assist teachers in identifying priorities and developing action plans for improving children's language and literacy practices (Herll & O'Drobinak, 2004).

Based on these practices, we developed a coaching model built around the following cycle: coaches engaged teachers in reflection and goal setting; they helped identify desired outcomes and strategies for achieving them; and collaboratively they developed an action plan for implementing new practices the following week, which in turn became the source of further reflection and action.

Sessions were weekly, one-on-one, on-site, and approximately 1 to 1½ hours long. Designed to align with the professional development course, they ran concurrently with it for the first 15 weeks and then continued through the academic year for an additional 17 weeks, for a total of 32 sessions.

Evaluation Design

Evaluation Questions:

Our study was designed to examine the effects of an intensive professional development intervention intended to improve the language and literacy knowledge and practices of teachers working in high-poverty early care and education settings. We hypothesized that teachers would significantly improve their support of language and literacy practices as a result of participating in a practice-based professional development approach that included both coursework and coaching. We compared our approach to traditional coursework alone and to a control group that reflected business as usual.

Evaluation Design

The study used a quasi-experimental design. The following describes the selection of the sample:

Sample

Participants for the study were recruited by the statewide 4C organization in cooperation with the Department of Human Services' Teacher Education and Compensation Helps program (T.E.A.C.H.). Begun in North Carolina and funded by the Child Care Block Grant quality set-aside, T.E.A.C.H. was designed to provide scholarships and incentives for child care workers already in the field to receive professional development in ways that might advance their education and improve quality practices (Cassidy, Buell, Pugh-Hoese, & Russell, 1995).

To be eligible for the project, practitioners had to meet three criteria: 1) they needed to be open to taking a course at their local community college in pursuit of an associate's degree in early childhood education; 2) they had to be employed at least 20 hours per week in a licensed child care setting; and 3) they had to care for children ages three to five.

From an initial pool of 353 eligible child care centers and 1,038 home-based settings in these priority areas, providers from 304 sites (168 center-based; 136 home-based) agreed to participate in the project. Participants were then randomly assigned to one of three groups: Group 1 (N = 86) took a 3-credit professional development course in early language and literacy at their local community college; Group 2 (N = 85) took the professional development course plus ongoing coaching; and Group 3 (N = 133) served as a control group, receiving no professional development course or coaching (with the understanding that such opportunities would be available at a later time).

The sample, equally distributed across the four urban areas, was all women and diverse: 62% Caucasian, 30% African American, 6% Hispanic, and 2% multiracial. Two-thirds of the sample worked between 30 and 60 hours per week and had considerable experience in child care (6-25 years); their average age was 39 years. Chi-square analyses indicated no significant differences across the three groups by race, age, or experience in child care. There were, however, statistically significant differences in education level (p < .001), with the control group reporting more general education courses beyond high school than either of the other two groups.

Instrumentation

Based on our theoretical model of teacher development, we assumed that content knowledge expertise in early language and literacy aligned with practice-sensitive professional development might represent the most powerful approach for transforming teachers’ instructional practices, and improving child outcomes. To our knowledge, however, previous research has not measured increases in teachers’ content knowledge in early language and literacy; nor have there been direct linkages between the content of professional development, instructional practices and child outcomes. To better understand these relationships, therefore, it was necessary to construct instruments, which we detail below.

Teacher Knowledge Assessment of Language and Literacy

To assess participants' knowledge of early language and literacy, we constructed a multiple-choice and true-false assessment. Based on our belief that high-quality early language and literacy instruction must rest on sound child development principles, 45 of the items tapped the eight core competencies in language and literacy, and 22 tapped foundational knowledge of child development (based on NAEYC standards). Two forms of the assessment were developed for pre- and posttest purposes, with an average completion time of 45 minutes.

This assessment was reviewed by several experts in the field of early literacy to ensure that the content was accurate and research-based. Each community college instructor reviewed the assessment for content validity, and alignment with the course syllabus. On the basis of their comments, revisions were made.

The Teacher Knowledge of Language and Literacy assessment was then administered to over 300 upper-level early childhood students. Results indicated excellent overall reliability (alpha = .96). A confirmatory factor analysis of the nine subscales (eight in language and literacy, one in child development) yielded a single factor with an eigenvalue of 3.20 (alpha = .74), accounting for 36% of the variance. These results indicated that the items worked together to define a corpus of early language and literacy knowledge that could be accurately assessed by this instrument.
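The internal-consistency statistic reported above (Cronbach's alpha) can be computed directly from item-level scores. The sketch below is illustrative only, using a small set of hypothetical dichotomous responses rather than the actual assessment data:

```python
# Minimal sketch of Cronbach's alpha: the ratio of shared to total variance
# across items. Data here are hypothetical, not from the study.

def cronbach_alpha(items):
    """items: list of per-item score lists of equal length,
    one entry per respondent. Returns Cronbach's alpha."""
    k = len(items)               # number of items
    n = len(items[0])            # number of respondents

    def variance(xs):            # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(variance(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical responses: 4 items scored 0/1 by 6 respondents.
data = [
    [1, 1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [1, 1, 1, 1, 1, 0],
]
alpha = cronbach_alpha(data)     # roughly .82 for this toy data
```

An alpha of .96, as found for the full 67-item instrument, indicates that respondents who score well on one item tend to score well on the others.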

Teacher Practice

We used two measures to assess the quality of language and literacy practices in center-based and home-based care settings: The Early Language and Literacy Classroom Observation (ELLCO) (Smith & Dickinson, 2002), and the Child/Home Early Language and Literacy Observation (CHELLO) (Neuman, Dwyer, & Koh, 2007), which was specially developed to measure home-based practices. Both measures were based on the theoretical assumptions of ecological psychology (Bronfenbrenner, 1979), which attribute children’s learning to the influences of the physical and the instructional supports in their environments.

ELLCO: The ELLCO is designed to measure the language and literacy environment for learning in center-based classrooms and is composed of three interdependent research tools: the Literacy Environment Checklist, the Classroom Observation and Teacher Interview, and the Literacy Activities Rating Scale. The Literacy Environment Checklist assesses the visibility of literacy-related materials such as books, alphabet materials, word cards, teacher dictation, alphabet puzzles, and writing implements. The Observational Ratings span activities including reading aloud, writing, assessment, and the presence or absence of technology, each examined along a rubric from 1 (deficient) to 5 (exemplary). The Literacy Activities Rating Scale summarizes information on the nature and duration of literacy activities, such as book reading and writing, during the observation period.

The ELLCO is widely used in prekindergarten classrooms, and its developers have demonstrated its sensitivity to both stability and change in language and literacy practices over time. Reliability coefficients for the instrument range in the .80s. The instrument has been used extensively in Early Reading First programs and in studies predicting child outcomes (Roskos & Vukelich, 2006).

CHELLO: Designed to assess many of the same environmental characteristics as the ELLCO, the CHELLO examines language and literacy practices specific to the contextual features of family and home-based child care settings (Neuman, Dwyer, & Koh, 2007). The CHELLO is composed of two interdependent research tools: the Literacy Environment Checklist and the Observation and Provider Interview. The Literacy Checklist measures the presence or absence of 22 items in the environment, including the accessibility of books, writing materials, and displays of children's work. The Observation focuses on the psychological supports in the educational environment, including teacher-child interactions in storybook reading, vocabulary development, and play. Like the ELLCO, the CHELLO uses a rubric ranging from 1 (deficient) to 5 (exemplary). Psychometric properties show good internal consistency, with a Cronbach's alpha of .82 for the Checklist and .91 for the Observation.

Although both the ELLCO and the CHELLO were designed as independent measures of the quality of language and literacy practices in center-based and home-based care settings, they share a common set of 19 items. By examining this subset of items (across all sections of the tools), we were able to compare and contrast language and literacy practice outcomes across two very different educational settings, as well as measure changes over time in these environments. Correlations between these shared items and the overall ELLCO and CHELLO were high (r = .91 and .92, respectively).

Summary scores for each measure were computed and used in the analysis: an ELLCO composite score reflecting observational ratings from all subscales (ranging from 1-124); a CHELLO composite score gathered from all subscales (ranging from 1-91); and a composite score of the 19 items shared by the ELLCO and CHELLO (ranging from 1-65).
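Conceptually, each composite is a sum of item ratings, and the shared-item composite sums only the 19 common items; the reported correlations between the two are Pearson coefficients. The sketch below illustrates this with hypothetical ratings (the first three items of each row stand in for the shared subset; none of these numbers come from the study):

```python
# Minimal sketch: composite scoring plus Pearson's r between the full
# composite and a shared-item subtotal. All data are hypothetical.

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-classroom item ratings (1-5 rubric); the first 3 columns
# play the role of the items shared across the two instruments.
ratings = [
    [5, 4, 4, 3, 5],
    [2, 3, 2, 2, 3],
    [4, 4, 3, 4, 4],
    [1, 2, 1, 2, 2],
]
full = [sum(row) for row in ratings]        # full composite per classroom
shared = [sum(row[:3]) for row in ratings]  # shared-item composite
r = pearson_r(full, shared)
```

A high r between the shared-item subtotal and the full composite, as reported above, is what licenses using the common items to compare center-based and home-based settings.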

Evaluation implementation

Differences in Language and Literacy Knowledge.

The first analysis examined the effects of professional development on teacher knowledge of language and literacy. Table 2 summarizes pre- and posttest scores on the Teacher Knowledge of Early Language and Literacy Assessment. Standard scores, ranging from 0 to 100, showed that, on average, these experienced providers clearly demonstrated some knowledge of key concepts in early literacy prior to taking the course.

Pretest scores in all groups were significantly higher for center-based teachers than for those in family care settings (F(2, 289) = 10.02, p < .03). Nevertheless, there were no significant pretest differences between conditions for either the center-based or family-care settings (F(2, 289) = .475, p = n.s.).
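The F statistics above come from one-way analysis of variance: the ratio of between-group to within-group mean squares. The sketch below computes this ratio from scratch for three small hypothetical groups of pretest-style scores (not the study's data):

```python
# Minimal sketch of a one-way ANOVA F statistic. Groups and scores are
# hypothetical stand-ins for the three study conditions.

def one_way_f(groups):
    """groups: list of lists of scores. Returns (F, df_between, df_within)."""
    all_scores = [x for g in groups for x in g]
    grand = sum(all_scores) / len(all_scores)
    means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares: group sizes times squared mean deviations.
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    # Within-group sum of squares: deviations from each group's own mean.
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_b = len(groups) - 1
    df_w = len(all_scores) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Three hypothetical condition groups of pretest scores (0-100 scale).
f_stat, df_b, df_w = one_way_f([
    [60, 62, 64],
    [58, 60, 62],
    [61, 63, 65],
])
# For these toy groups, F(2, 6) = 1.75.
```

A small F relative to its degrees of freedom, as in the second comparison above, is what supports the conclusion that the conditions did not differ at pretest.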

Timing and procedures used for data collection

Prior to the Intervention.

Prior to the start of the intervention, teachers in all three groups were administered the Teacher Knowledge of Language and Literacy Assessment. To provide easy access, the assessment was placed on the web; participants were assigned unique identifier codes and the information was immediately collected and coded into a database.

During the same period, trained research assistants observed each center or home-based setting using the ELLCO or the CHELLO. To establish inter-rater reliability, observers independently rated 30 centers and home-based settings in pairs. Cohen's kappa statistic (Cohen, 1960, 1968) was used to calculate reliability. Weighted kappas for the ELLCO and the CHELLO were substantial at .64 and .60, respectively (Landis & Koch, 1977). Once inter-rater reliability was established, individual observers conducted all remaining observations.
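Weighted kappa credits near-misses between two raters on an ordinal scale (here, the 1-5 rubrics) rather than counting only exact agreement. The sketch below implements a linearly weighted version from scratch, with hypothetical paired ratings rather than the study's observations:

```python
# Minimal sketch of linearly weighted Cohen's kappa for two raters using an
# ordinal rubric. Ratings below are hypothetical.

def weighted_kappa(r1, r2, categories):
    """r1, r2: equal-length lists of ratings drawn from `categories`
    (ordered). Returns linearly weighted Cohen's kappa."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # Observed joint proportions.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1 / n
    # Marginal proportions for each rater.
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Linear weights: full credit on the diagonal, partial credit nearby.
    w = [[1 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    po = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    pe = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return (po - pe) / (1 - pe)

# Hypothetical paired observer ratings on the 1-5 rubric.
rater1 = [1, 2, 3, 4, 5, 3, 2]
rater2 = [1, 2, 4, 4, 5, 3, 1]
kappa = weighted_kappa(rater1, rater2, [1, 2, 3, 4, 5])
```

By the Landis and Koch (1977) benchmarks cited above, values in the .61-.80 range, like the study's .64 and .60, indicate substantial agreement.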

During the Intervention.

Professional Development Coursework. Starting in September, participants in Groups 1 and 2 attended the community college closest to them for the 15-week early language and literacy course, which met for 3 hours per week. To ensure fidelity to the syllabus, coordinators at each of the community colleges visited classes regularly. Observations indicated that approximately half of each class period was devoted to lectures and discussions on the topic of the week, followed by workshop activities designed to connect research to practice. Instructors met with research staff several times over the semester to review students' progress. There were few absences, and overall attrition was minimal (less than 2%).

Coaching. Fourteen coaches were recruited, hired, and supervised by the community colleges. To be eligible, coaches needed a bachelor's degree in early childhood, experience working with adults, previous early childhood teaching experience in the priority urban area, and knowledge of research-based practices in early language and literacy. Two of the coaches had prior experience mentoring adults; the majority did not. Nevertheless, all were seasoned professionals, with an average of fifteen years of work experience in early childhood.

Prior to the coaching intervention, a two-day coaching institute was held to provide orientation and training. Summary statistics from the ELLCO and the CHELLO were provided, highlighting key strengths and needs among the child care workforce who would be participating in the project.

Coaches were randomly assigned to participants based on their geographic location. Participants were called and informed that they would receive weekly coaching for the year. Although several providers were somewhat reluctant at first, all agreed to participate in coaching. Starting two weeks after the course had begun, coaches began their weekly visits.

A number of common procedures were implemented to ensure fidelity across all four community colleges. For example, to maintain consistency with the coaching model, coaches were required to keep a log of their visits and to document their progress with practitioners using a reflection form. On this form, they were asked to specify the language and literacy content area(s) being addressed, the goals set, and the strategies and action plans for completing next steps.

These reflection sheets were collected each week at debriefing meetings with supervisors at the community colleges. These debriefing sessions gave coaches opportunities to review their notes and to share experiences and resources with one another. They also served as an accountability mechanism for us, providing information on any missed or rescheduled sessions, as well as the number of hours coaches worked.

We also made unannounced visits to coaches throughout the year. Detailed observations from these visits in center-based and home-based settings provided us with a rich set of observations on the quality of the coaching sessions and on the interactions among the coach, caregiver, children, and occasional parent volunteers (see Cunningham, 2007, for discussion of the qualitative data).

Following the 15-week course, participants in all three groups took an equivalent form of the Teacher Knowledge of Language and Literacy Assessment. All tests were scored, coded, and entered into the database. Group 1 (professional development only) continued its coursework at the community college (each provider was required to complete 6 credit hours in total to receive the T.E.A.C.H. scholarship and stipend); Group 2 (professional development plus coaching) continued to receive coaching; and Group 3 (control) received no intervention.

Post-intervention

Observations were once again conducted in centers and homes in late spring using the ELLCO and the CHELLO environmental instruments. The final N was 291 (Centers; Family-care), representing an overall attrition rate of 4.3%, due to reassignments, end of employment, or other unspecified personal reasons.

How the Data were Analyzed

Following the professional development course, post-test scores showed only modest improvements. On average, scores in the treatment groups increased by only four points for center-based teachers, and by a slightly higher six points for home-based teachers, compared to the control group.