COMPLIANCE WITH LEGALLY MANDATED TEACHER EVALUATION PRACTICES: A PRELIMINARY INQUIRY

By: Leonard H. Elovitz and Nicholas Celso

Kean University

Background and Statement of the Problem

In 1975, the New Jersey Legislature adopted a law (NJ Stat. Ann. 18A:27-3.1 (1975)) requiring the observation and evaluation of nontenured teachers. In pertinent part, the law provides that:

Every board of education in this State shall cause each nontenure teaching staff member . . . to be observed and evaluated in the performance of . . . his duties at least three times during each school year, but not less than once during each semester . . . Each evaluation shall be followed by a conference between that teaching staff member and his . . . superior. The purpose of this procedure is to recommend as to reemployment, identify any deficiencies, extend assistance for their correction and improve professional competence.

Based on the authority of the foregoing enabling statute, the New Jersey Department of Education (DOE) subsequently adopted regulations mandating specific practices and procedures to be followed in conjunction with the observation and evaluation of all teachers in the State’s public schools (NJ Adm. Code 6:3-1.19 (1976), effective January 19, 1976). In so doing, the DOE extended certain aspects of the evaluation requirements to include tenured teachers as well. These regulations (hereinafter referred to as “code” or “regulations”), which have the force of law, most recently have been recodified at NJ Adm. Code 6A:32-4.5 (2006) without significant modification.1 The original enabling statute continues in full force and effect as well (NJ Stat. Ann. 18A:27-3.1 (2006)).

Pursuant to the regulations, every board of education is required to “adopt a policy for the supervision of instruction, setting forth procedures for the observation and evaluation of all nontenured teaching staff members,” and that policy is to be distributed to staff members at the beginning of their employment (NJ Adm. Code 6A:32-4.5(b) (2006)). Similarly, boards are required to develop policies and procedures for the evaluation of tenured teaching staff members (NJ Adm. Code 6A:32-4.4(a) (2006)). These policies and procedures, which are to “be developed under the direction of the district’s chief school administrator in consultation with tenured teaching staff members” (NJ Adm. Code 6A:32-4.4(c) (2006)), are to be distributed annually to teaching staff members prior to October 1 (NJ Adm. Code 6A:32-4.4(d) (2006)).

Each of the three observations of nontenured teachers must be conducted for a minimum of one class period in a secondary school, and for one complete subject lesson in an elementary school (NJ Adm. Code 6A:32-4.5(a) (2006)). Formal classroom observations of tenured teachers are not expressly required by the code. However, the preparation of an annual evaluation is required (NJ Adm. Code 6A:32-4.4(a)7 (2006)), and observation of classroom instruction is listed as a method of data collection (NJ Adm. Code 6A:32-4.4(c)3 (2006)). This is generally construed to mean that one formal classroom observation is required for tenured teachers.

In addition to the observations and evaluations discussed above, an annual written evaluation of both tenured and nontenured teachers should include, but not be limited to: (1) performance areas of strength; (2) performance areas needing improvement, based upon the job description; (3) an individual professional development plan (PDP) developed by the supervisor and teaching staff member (the PDP replaces the individual professional improvement plan (PIP) required in the 2005 regulations)2; and (4) a summary of available indicators of pupil progress and growth, together with a statement of how these indicators relate to the effectiveness of the overall program and the performance of the individual teaching staff member (NJ Adm. Code 6A:32-4.4(f) and (c) (2006)).

Thus, a legislatively and administratively prescribed means to improve teaching has been available in the State for over thirty years. However, our discussions with graduate students enrolled in the Program for Educational Leadership at Kean University, most of whom are teachers in central and northern New Jersey, suggest that the pertinent regulations may be only partially or incorrectly implemented, or ignored completely.

Purpose of the Study

The purpose of this preliminary study, therefore, was to estimate the level of compliance by public school districts in central and northern New Jersey with legally mandated teacher evaluation practices and, further, to search for ostensibly related factors that might be associated with compliance or noncompliance, in order to chart a course for further, more intensive research.

Methods and Procedures

During the 2005-2006 academic year, we asked graduate students matriculating for certification as principals and/or supervisors at Kean University’s Nathan Weiss Graduate College to obtain and submit copies of the teacher observation/evaluation forms currently in use in their school districts. This convenience3 sample resulted in an unduplicated count of twenty-one summative4 instruments, representing seven (one-third) of New Jersey’s twenty-one counties. As can be seen from Table I, the sample represents the broad spectrum of diversity of the State’s public school districts, including three city districts, the State’s largest regional high school district, a vocational school, and suburban and rural school districts. Notwithstanding this diversity, the sample does not include any districts from the southern portion of the State. Accordingly, any inferences to be drawn from the data should be limited to the central and, to a lesser extent, northern regions of New Jersey.

As described in greater detail below, we also conducted informal discussions in graduate classes at Kean University concerning the processes and procedures relevant to the subject evaluations.

{Insert Table I}

Analysis

We analyzed each summative instrument for compliance with six specific legal requirements, determining whether or not each instrument clearly evidenced compliance with each criterion. To be deemed compliant, an instrument had to unequivocally require the observer/evaluator to address a specific legally mandated criterion. If the instrument addressed the mandated criterion in any way that required the evaluator to comment upon it or to rate its attainment, it was deemed compliant. Conversely, where the instrument was entirely silent in this regard, it was deemed noncompliant.

Additionally, as mentioned, we engaged students in personal discussions concerning their school districts’ teacher evaluation practices and procedures. Their comments were recorded, compared, and included in our summary. Again, because of the preliminary nature of this inquiry, we were concerned primarily with identifying areas of compliance and noncompliance for future study. The summary of the interviewees’ comments presented below, therefore, represents more of an endorsement of our initial perceptions than a scientifically tested confirmation. The results of this analysis are presented in Table II.

{Insert Table II}

Findings

Post-observation Conference

N.J.A.C. 6A:32-4.5(d) requires a post-observation conference within 10 days of the observation of a nontenured teacher. Inexplicably, no such requirement is listed for tenured teachers; however, observation conferences are to be included in the policies and procedures developed for the evaluation of tenured teaching staff members (N.J.A.C. 6A:32-4.4(c)4). The need for such a conference, therefore, can reasonably be inferred from this requirement.

As can be seen from the data reported in Table II, approximately half (48%) of the instruments analyzed fail explicitly to address this requirement.

Designation of Areas of Strength/In Need of Improvement

The data reveal that all of the instruments sampled address areas of strength and areas in need of improvement. However, only five had sections explicitly entitled “Areas of Strength” and “Areas Needing Improvement.” Of the others, nine had checklists pairing various characteristics of teaching with a Likert-type scale designating levels of assessment. The remainder had blank spaces to be filled in as narratives under specified teaching characteristics.

Summary of Indicators of Student Progress

Fully 57% of the sampled instruments fail clearly to address a summary of indicators of student progress. Nearly all fail clearly to address the provision of a statement explaining how pupil progress indicators relate to the effectiveness of the overall program (95%) and how such indicators relate to the performance of the individual teacher (90%).

Reference to Professional Improvement Plan (PIP)

More than one-third (38%) of the sampled instruments fail clearly to incorporate any mention of a professional improvement plan. (Note: the PIP has been renamed the PDP in the revised code.)

The foregoing findings preliminarily suggest widespread noncompliance with various legally mandated teacher evaluation practices. In order to gain a deeper insight, we engaged students in personal discussions concerning their schools’ practices and procedures. Although not scientific, this process further underscores the need for additional investigation. A brief summary of their comments follows:

1.  Although the regulations require distribution of local teacher evaluation policies and procedures to tenured teachers no later than October 1 (NJ Adm. Code 6A:32-4.4(d)) and to nontenured teachers at the beginning of their employment (NJ Adm. Code 6A:32-4.5(b)), many students recall receiving such documents when they were hired, but few report receiving anything after gaining tenure.

2.  Although the code makes clear that the annual evaluation is to be in addition to the three required observations for nontenured teachers (NJ Adm. Code 6A:32-4.5(c) (2006)), students report that this does not always occur.

3.  Notwithstanding the clear requirement for a post-observation conference, students report that the written report often simply appears in their mailboxes with a note instructing them to sign and return it, or to make an appointment with the supervisor if they have questions.

4.  Students indicate that some supervisors circumvent the required use of indicators of pupil progress simply by listing the assessment instruments used in the classroom by the teacher.

5.  Despite the former code’s express requirement that each teacher have an “individual professional improvement plan” (NJ Adm. Code 6A:3-4.1(d)3; 4.3(f)4 (2005)), many students report that they were given full authority over what their PIP objectives and activities would be. In some cases, all of the teachers in a school or department are given the same PIP.

Discussion

The State Legislature and Department of Education have attempted to prescribe a means for the improvement of professional practice in our schools by mandating minimum procedural and substantive teacher evaluation requirements. Through that process, it is clearly intended that the supervisor use direct observations and subsequent conferences with teachers formatively, both to improve instruction and to gather data for the summative evaluation, relying as well on other inputs such as indicators of pupil performance. The evaluation document should point out areas of strength to encourage the teacher to continue exhibiting desirable behaviors. The document also should identify areas needing improvement to put the teacher on notice that certain behaviors must be modified. The PDP objectives should then be derived directly from this part of the evaluation and be incorporated into discussions emanating from future observations. Indeed, an important purpose of the required supervisor/teacher conference is to facilitate the joint development of the PDP. It is important to note that the teacher does not have veto power over the PDP objectives (Douma v. East Brunswick Board of Education (1981)).

In addition to all of the foregoing, it is unequivocally the intent of the Department of Education that indicators of student progress be used systematically to address the individual effectiveness of teachers. Perhaps the most significant finding of the present study is the widespread failure of the sampled districts to comply with this mandate. This finding is not surprising, however, given that the topic has been controversial over the years. The question of whether any individual teacher’s effectiveness can be causally linked to student success has been a difficult one with which to grapple because of the multivariate nature of the factors contributing to measured student outcomes. Although it may be legitimate to question the extent to which any one teacher is directly accountable for student success or failure, it is beyond doubt that the teacher plays an important role. More to the point, however, the use of indicators of student progress as part of the teacher evaluation process is not optional: it is legally required. As indicated above, the regulations clearly call for the inclusion of indicators of pupil progress as part of the summative process.

The vast majority of observation/evaluation instruments sampled not only fail clearly to require the use of pupil progress indicators in the manner required by the regulations, but also exhibit substantial noncompliance with other legal requirements. Clearly, further investigation is warranted to determine more precisely the correlates and/or causes of such noncompliance. For example, future inquiries might consider such factors as recent restrictions on local administrative expenditures (NJ Stat. Ann. 18A:7F-5c(1) (2006)), school size, the ratio of administrative staff to teachers, and district wealth, some or all of which might play a significant role in explaining the level of compliance or noncompliance observed. Pending the completion of such inquiry, it appears likely that there is substantial variation between the legally prescribed model for teacher evaluation and that which currently is being implemented in the schools.

Endnotes

1. The analysis conducted in this study is predicated upon the 2005 administrative code, since the instruments examined were collected at that time. However, references are provided to the revised 2006 code, which updates the former code without significant modification.

2. The revised code replaces the term “individual professional improvement plan” with the term “individual professional development plan.” However, the underlying procedures and rationale for development of the plan remain substantially unchanged.

3. A limitation of this preliminary inquiry is the possible bias introduced by a convenience sample. A follow-up study will utilize scientific sampling techniques.

4. Both formative and summative evaluations are required under the relevant administrative code provisions.

References

New Jersey Department of Education. 2005-06 Enrollment Data [Online]. Available: http://www.nj.gov/njded/data/enr/enr06/.

Douma v. East Brunswick Board of Education, NJ School Law Decisions 443, 460 (April 22, 1981).

NJ Adm. Code Title 6 (West 1976).

NJ Adm. Code Title 6A (West 2005).

NJ Adm. Code Title 6A (West 2006).

New Jersey Department of Education. Comparative Spending Guide, 2006 [Online]. Available: http://www.state.nj.us/njded/2006.