School Improvement Plans Annotated Bibliography

Abrutyn, Leslye S. 2006. “The Most Important Data.” Educational Leadership 63 (6): 54-57.

The Penn-Delco School District in Pennsylvania uses a process known as a “walk-through” each year to gather data on student learning. Each fall, after sifting through the usual data and standardized tests that identify areas of needed improvement for a school, the district engages administrators, teachers, and “invited community leaders” to interview every student in the school, asking how the school should improve and what their impressions of their education have been so far. The district compiles the data and then, in the spring, performs another walk-through to see whether the proposed improvements have been achieved. The program is seen as highly successful because of the high level of personal ownership that students and teachers have in their own school’s improvement process. The concrete result is that in the twenty years the district has engaged in the walk-through process, its standardized test scores have gone up in all grades.

Anfara, Vincent A., Jr., Faye Patterson, Alison Buehler, and Brian Gearity. 2006. “School Improvement Planning in East Tennessee Middle Schools: A Content Analysis and Perceptions Study.” NASSP Bulletin 90 (4): 277-300.

The authors begin the article with a standard literature review of school improvement plans and bemoan the over-reliance on data-driven decision making in the academic improvement arena rather than on “intuition, tradition, or convenience in making decisions about the best practices and strategies to improve student learning.” They then analyze the Tennessee School Improvement Planning Process (TSIPP) and its application in East Tennessee middle schools, looking at 17 middle schools that had completed a SIP in the 2001-2002 school year. Their content analysis found that the SIPs relied heavily on boilerplate and regulations from the Tennessee Department of Education and were not particularly effective at actually improving schools. SIPs are useful for identifying areas in need of targeted improvement, but their over-reliance on raising standardized test scores does not necessarily translate into any tangible academic gain for the children.

Archer, Jeff. 2006. “The Story Behind the Stories.” Education Week 26 (2): 36-40.

In its first issue, the journal Education Week reported that President Reagan wanted to downgrade the U.S. Department of Education from its cabinet-level position and give more power back to states and municipalities in educational decision-making. Education Week broke that story and, for 25 years, has pushed to keep education a subject of national and federal discussion. The article is neither scholarly nor particularly useful to this discussion; it instead recounts Education Week’s successes and failures in education coverage. It is a simple look back at Education Week’s history.

Armstrong, J. Scott. 1982. “The Value of Formal Planning for Strategic Decisions: Review of Empirical Research.” Strategic Management Journal 3 (3): 197-211.

Using pre-existing empirical research from the field of organizational behavior, this paper attempts to answer three questions: (1) Is formal planning[1] useful for strategic decision making? (2) Which aspects of the planning process are most useful? (3) In what situations is planning most useful? Armstrong finds the evidence on usefulness inconclusive. Stakeholder participation, alternative strategies, feedback monitoring, and commitment to objectives are among the most important aspects of the overall planning process. No evidence is found to indicate preferred settings for implementation of strategic planning, although Armstrong proposes that inefficient markets, large changes, high uncertainty, and high complexity increase the need for formal planning. The paper’s observations about useful aspects of planning (e.g., the paragraph on monitoring results) will be valuable for evaluating School Improvement Plans. It is important to recognize, however, that within the research examined, the yardstick for measuring the usefulness of formal planning was “profits and growth,” not education improvement.

Berry, Frances Stokes and Barton Wechsler. 1995. “State Agencies’ Experience with Strategic Planning: Findings from a National Survey.” Public Administration Review 55 (2): 159-168.

Strategic planning is defined by the authors as a systematic process for managing an organization and its future direction in relation to its environment and the demands of external stakeholders. The article was written while strategic planning was still viewed as a “hot” innovation in public administration. It describes the results of a survey designed to determine how broadly and how successfully strategic planning was being used among twenty state agency types in fifty states; questionnaires were submitted to the directors of all state agencies (N=987). Survey results indicate that agency leaders learned about strategic planning primarily from chief executives who brought pertinent experience from private industry, and the Harvard Policy Model predominated among the agencies surveyed. Respondents’ comments and assessments were generally positive, leading the authors to forecast “bigger and more important impacts in the future.” The article fails to adequately answer how broadly the system is used, and with regard to School Improvement Plan research it is too wide in scope to offer any contribution. Apart from respondent opinion, the survey provides no data that can be used to quantitatively measure effectiveness.

Brown, Elizabeth Todd and Julie A. Thomas. 1999. “Expecting the Best, Producing Success.” Peabody Journal of Education 74 (3/4): 224-235.

This article summarizes the collaborative effort between Wheeler Elementary School and the University of Louisville to make Wheeler into a Professional Development School (PDS). The university’s purpose was to develop a model that would not only assist its education graduate students in their development but would also benefit Wheeler’s current teachers and their students through the approach Wheeler has adopted.

The Wheeler PDS model contains three elements:

1.  Collaboration, which is found in the teacher team structure. A group of teachers works as a team through specific stages of the students’ academic development. With each teacher teaching the students at a different grade level, the team is able to discuss weaknesses, strengths, and other relevant considerations to better instruct their students. The teams also interact closely with University of Louisville faculty, draw on help from programs outside the university and the school district, and operate within a committee structure.

2.  Involvement. Here, teachers and parents work together to provide better experiences for student learning. They do this through community meetings, where feedback is presented, and information sessions, where teaching approaches are presented and evaluated. In both cases, teachers and parents interact to create better learning environments for students.

3.  Achievement. Here, the results from Wheeler and University efforts are evaluated. They depict a range of increases in student achievement and attribute these increases to the development and implementation of the Wheeler PDS model.

The article concludes by noting that, while the Collaboration, Involvement, and Achievement efforts have been successful, the evolution of the model must continue. The belief is that this will sustain the innovative mindset that has made Wheeler stand out and help to develop more techniques and programs that will improve student achievement.

The successes of this program may give our project a useful point of comparison for what works and for what the Clark County School District and its schools may be developing to improve student achievement.

Brown, Elizabeth Todd and Julie A. Thomas. 1999. “Expecting the Best, Producing Success.” Peabody Journal of Education 74 (3/4): 224-235.

This article was written by the principal and a math teacher at Wheeler Elementary School in Kentucky. Wheeler was selected to be a “Professional Development School” (PDS) in the early 1990s. This case study describes some of Wheeler’s effective and non-traditional techniques. One example is collaboration: teachers at Wheeler form three-person teams and stay with the same group of students for multiple years. The three teachers share responsibility for planning, student assessment, and teaching strategy, and help each other develop these strategies. Another reason PDS schools are rated so highly in Kentucky is their openness and good communication with parents; PDS schools are expected to be transparent in their methods and to brief parents effectively on non-traditional teaching strategies. The authors then go on to detail the school’s successes in raising test scores and other accountability measures. They say that sustained teacher professional development, collaboration among teachers, and parent involvement are the most important factors in the school’s success.

Chen, Eva, Margaret Heritage, and John Lee. 2005. “Identifying and Monitoring Students’ Learning Needs With Technology.” Journal of Education for Students Placed at Risk 10 (3): 309-332.

The researchers examine the Quality School Portfolio (QSP), a “Web-based decision support tool” first developed at UCLA. They present a case study demonstrating how emerging technologies can assist in “identifying low-performing students and in planning interventions to meet their needs.” If at-risk or low-performing students are identified early, they have more opportunity to improve and to receive the help they need. QSP is very similar to SPSS: it allows researchers to analyze data on individual students as well as on aggregates and groups. QSP can analyze standardized test scores, demographic data, classroom grades, and other markers to tease out descriptive statistics and correlations in the data, and it can be used to develop risk factors for low-performing students and to flag those students as needing extra help or specific instructional practices.
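
To make the kind of analysis described above concrete, the following is a minimal, hypothetical Python sketch, not QSP’s actual implementation; the student records, field names, and cutoff rule are invented for illustration. It flags students whose standardized test score falls more than one standard deviation below the group mean and reports the correlation between test scores and classroom grades.

    # Hypothetical sketch of the kind of analysis QSP is described as supporting;
    # not the tool's actual implementation. All data below is invented.
    from statistics import mean, stdev, correlation  # correlation requires Python 3.10+

    students = [
        {"name": "Student A", "test_score": 412, "gpa": 2.1},
        {"name": "Student B", "test_score": 530, "gpa": 3.4},
        {"name": "Student C", "test_score": 375, "gpa": 1.8},
        {"name": "Student D", "test_score": 498, "gpa": 3.0},
    ]

    scores = [s["test_score"] for s in students]
    grades = [s["gpa"] for s in students]

    # Descriptive statistics on the standardized test scores.
    cutoff = mean(scores) - stdev(scores)

    # Pearson correlation between the two markers (test scores and classroom grades).
    r = correlation(scores, grades)

    # Flag students scoring more than one standard deviation below the mean.
    at_risk = [s["name"] for s in students if s["test_score"] < cutoff]
    print(f"correlation = {r:.2f}; flagged for intervention: {at_risk}")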

Clark, Terry A. and Dennis McCarthy. 1983. "School Improvement in New York City: The Evolution of a Project." Educational Researcher 12 (4): 17-24.

The article documents SIP implementation in an initial group of 10 schools, followed by a second group of nine schools, over a three-year period in New York City in the early 1980s. The architect of the plan focuses on what he believes to be the five most important factors that characterize schools with high-achieving students: strong administrative leadership; an orderly school climate; emphasis on basic skills acquisition; high expectations for student achievement; and monitoring of pupil progress. The initial stages focused on school-based planning; implementation followed an eight-step plan that began with program introduction and concluded with maintenance (in which implementation, evaluation, and revision become cyclical). The eight project phases were monitored by the Office of Educational Evaluation and funded by the city and the Ford Foundation. Assessments were based on reports from a liaison between the city and the individual schools, test scores, qualitative data such as interviews and questionnaires from project participants, and relevant program documents. The schools were selected on the basis of application, with the criteria being (1) voluntary participation with the principal, (2) a match between school needs and SIP objectives, and (3) a lack of development programs in the building. The demographics of the schools showed that they were majority-minority (although two were predominantly white) and that the percentage of students reading “at or above grade level” varied.

In the first year of implementation, the study found that the first group of schools showed an 80% completion rate (two schools dropped out over disagreements) and that the most important predictor of success was the school’s willingness to admit, a la AA meetings, that it had a problem, and the degree to which it sought help (r = .74). In the second year, the first group implemented reforms geared around the five-point plan, and the authors argue that here the principal of the school where the plans were implemented was the most important predictor: the more active the principal, the better the results. In the schools that showed little improvement, the principal was also found to have doubts about the plan. The results for the second group were similar. In their third-year analysis, coupled with their assessment, the authors argue that the answer to their research question (how has participation in SIP affected student achievement?) is that SIP implementation results in improved reading levels (the authors note that statistical significance cannot be tested due to the qualitative nature of the results). The authors conclude that the best predictors were (1) voluntary participation and (2) the alterable roles of those involved in the project, such as the liaison and the principal.

Conclusion: This extraordinarily poorly written article shows the futility of trying to compress three years’ worth of conclusions into seven pages. The authors show a complete disregard for systematic analysis (a lack of theory-driven research, selecting cases on the dependent variable, non-random samples, ineffective pre- and post-test comparisons, and the absence of a control group are the more obvious problems), and their conclusions are inconsistent with their findings. The authors argue that reading levels improved, but they offer no alternative intervening variable besides the implementation of SIP, nor does the study mention whether testing procedures remained constant throughout the time period. All in all, a worthless effort, save perhaps for its initial five-point typology.

Clark, Terry A. and Dennis McCarthy. 1983. "School Improvement in New York City: The Evolution of a Project." Educational Researcher 12 (4): 17-24.

This article documents SIP efforts for elementary and middle schools in New York City. These schools used a plan developed by Ronald Edmonds, an expert on successful schools who was brought in by educators in the area. His first task was a literature review identifying factors that contribute to successful schools. His studies identified five factors observed in model schools:

1.  Strong administrative leadership

2.  An orderly school climate, which includes an account of the physical state of the facilities

3.  Emphasis on basic skills acquisition

4.  High expectations for student achievement

5.  Monitoring of pupil progress

In addition to these criteria were the involvement of a liaison, five committees at each of the individual schools (one for each factor), and a three-phase process that took place over the course of three years.

Year One: Observations were made to determine what the individual schools needed in order to meet the five-factor criteria set out in Edmonds’ study. Liaison involvement was heaviest at this first stage.

Year Two: These determinations were then implemented, and the liaison’s involvement began to decrease as the committees, administrators, faculty, parents, and communities became more involved.