School-wide Positive Behavior Support
Evaluation Template
October 2005

Rob Horner, George Sugai, and Teri Lewis-Palmer

University of Oregon

Purpose

This document is prepared for individuals who are implementing School-wide Positive Behavior Support (PBS) in districts, regions, or states. The purpose of the document is to provide a formal structure for evaluating whether School-wide PBS implementation efforts are (a) occurring as planned, (b) resulting in change in schools, and (c) producing improvement in student outcomes.

The organization of this template provides (a) an overview of the context within which School-wide PBS is being used, (b) a common set of evaluation questions, (c) evaluation instruments/procedures that address each question, and (d) samples of evaluation data summaries that can be used to build formal evaluation reports.

Context

School-wide positive behavior support (SW-PBS) includes a range of systemic and individualized strategies for achieving social and learning outcomes while preventing or reducing problem behavior for all students. School-wide PBS includes universal prevention of problem behavior through the active teaching and rewarding of appropriate social skills, consistent consequences for problem behavior, and on-going collection and use of data for decision-making. In addition, School-wide PBS includes an array of more intensive supports for those students with more severe behavior support needs. The goals within School-wide PBS are to prevent the development of problem behavior, to reduce on-going patterns of problem behavior, and to improve the academic performance of students through development of a positive, predictable and safe school culture.

School-wide PBS is being implemented today in over 4300 schools throughout the United States. Each of these schools has invested in training on school-wide PBS strategies, has a team that is coordinating implementation, and is actively monitoring the impact of implementation on student outcomes.

As more schools, districts, states, and regions adopt School-wide PBS, there will be an increasing need to formally evaluate whether these training and technical assistance efforts (a) result in change in the way schools address social behavior, and (b) result in change in the behavioral and academic outcomes for students.

Need for Evaluation

School-wide PBS will continue to be adopted across the U.S. only if careful, on-going evaluation of the process and outcomes remains a central theme. Evaluation outcomes will both document the impact of School-wide PBS and guide improvement in the strategies and implementation procedures. Evaluation may occur at different scales (one school, versus a district, versus a state or region) and at different levels of precision (local self-assessment, versus state outcome assessment, versus national research-quality analysis). The major goal of evaluation is always to provide accurate, timely, valid, and reliable information that is useful for decision-making. The stakeholders and the decisions being made will always shape the evaluation. We recognize that the type, amount, and level of information gathered for an evaluation will vary; it is very likely that no two evaluation efforts will be exactly the same. At the same time, there will be value in identifying common evaluation questions, data sources, and reporting formats that may be useful across evaluation efforts. This evaluation template is intended to benefit those building evaluation plans to assess school-wide PBS. Our hope is that the measures and procedures defined in the template will make it easier for others to design evaluation plans, and that over time a collective evaluation database may emerge that will benefit all those attempting to improve the social culture of schools.

In building an evaluation plan we recommend (a) beginning with the decisions that will be made by stakeholders, (b) organizing the core evaluation questions that will guide decision-making, (c) defining valid, reliable and efficient measures that address the evaluation questions, and (d) presenting information in an iterative, timely and consumable format.
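
To make these four components concrete, the sketch below (in Python) shows one hypothetical way an evaluation team might record a plan. This is an illustration only; the class and field names are invented for this sketch and are not part of any SW-PBS instrument.

    from dataclasses import dataclass, field

    # Illustrative sketch only: a minimal structure for recording the four
    # components of an evaluation plan described above. Every name here is
    # a hypothetical invention, not part of any SW-PBS instrument.

    @dataclass
    class EvaluationQuestion:
        text: str                  # the question guiding decision-making
        measures: list[str]        # instruments that address the question
        collection_cycle: str      # e.g., "quarterly", "annually"
        report_format: str         # how results reach stakeholders

    @dataclass
    class EvaluationPlan:
        stakeholder_decisions: list[str]                    # component (a)
        questions: list[EvaluationQuestion] = field(default_factory=list)

    # One hypothetical entry, using measures named later in this document.
    plan = EvaluationPlan(
        stakeholder_decisions=["Expand implementation?", "Reallocate resources?"],
        questions=[
            EvaluationQuestion(
                text="Has training changed behavior support practices?",
                measures=["TIC", "EBS Self-Assessment Survey", "SET"],
                collection_cycle="TIC quarterly; SET annually",
                report_format="Percent implementation by school and district",
            )
        ],
    )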

Evaluation Decisions

Our experience suggests that most efforts to implement School-wide PBS begin with a “special” investment by the state, region or federal government in a demonstration effort designed to assess (a) if School-wide PBS can be implemented effectively in the local area, (b) if School-wide PBS results in valued outcomes for children, families and schools, and (c) if School-wide PBS is an approach that can be implemented in a cost-effective manner on a large scale.

The decisions that guide a formal evaluation will focus simultaneously on issues of (a) accountability and oversight (e.g., did the project conduct the activities proposed?), (b) the impact of the project (e.g., was there change in school practices, student behavior, and academic outcomes?), and (c) implications for the further investment needed to take the effort to a practical scale.

An on-going challenge for any evaluation of School-wide PBS is that the individual behavior of children and adults is the target of intervention efforts, but the “whole school” is the unit of most evaluation analyses. In essence, the goal of School-wide PBS is to create a “whole school” context in which individuals (both faculty and students) are more successful. Most evaluations will reflect this attention to individual behavior, with summaries that capture the global impact on the whole school.

As evaluation plans are formed, there are some common traps worth avoiding.

1.  Evaluation plans are strongest when focused on real outcomes (change in school practices and student behavior).

  1. It is possible for evaluation reports to focus only on counts of training events and participant satisfaction. These are necessary but insufficient pieces of information.

2.  Evaluation plans should examine student outcomes only when School-wide PBS practices have been implemented.

  1. It is important to know first whether training and technical assistance resulted in change in the behavior support practices used in schools.
  2. An important “next” question is whether those schools that implemented to criterion saw change in student outcomes. If schools did not implement School-wide PBS practices, we do not expect to see changes in student outcomes. (A minimal sketch of this filtering rule follows this list.)

3.  Evaluation plans often focus only on initial training of demonstration sites, and ignore the capacity development needed for large-scale implementation.

  1. School-wide PBS efforts focus simultaneously on establishing demonstrations of effectiveness (individual schools) and on building the capacity to expand to a socially important scale. There is often an assumption that initiatives start with a small demonstration, and that only after the demonstration is documented as viable and effective does work begin on large-scale capacity building. Within School-wide PBS there is an immediate emphasis on building the (a) coaching network, (b) local trainers, and (c) formal evaluation structure that will be key to taking School-wide PBS to scale. Establishing a Leadership Team with a broad vision and mandate is part of the first step toward implementation of School-wide PBS.
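
Trap 2 above implies a simple analysis rule: look for change in student outcomes only in schools that implemented to criterion. The Python sketch below illustrates that rule. The 80% figure follows the criterion noted for the TIC later in this document; the school records, field names, and values are invented for illustration.

    # Sketch of the analysis rule in trap 2: examine student outcomes only
    # for schools that implemented School-wide PBS to criterion. All
    # records and field names below are hypothetical illustrations.

    IMPLEMENTATION_CRITERION = 0.80  # e.g., 80% total score on the SET

    schools = [
        {"name": "School A", "set_total": 0.92, "odr_change": -0.31},
        {"name": "School B", "set_total": 0.55, "odr_change": +0.02},
        {"name": "School C", "set_total": 0.84, "odr_change": -0.18},
    ]

    # Separate implementers from non-implementers before asking outcome
    # questions.
    implementers = [s for s in schools if s["set_total"] >= IMPLEMENTATION_CRITERION]

    # Outcome change is interpretable only for implementing schools; the
    # others speak to the training/TA question, not to the effect of
    # SW-PBS practices.
    for s in implementers:
        print(f"{s['name']}: change in ODR rate = {s['odr_change']:+.0%}")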

Evaluation Questions

Different evaluation questions will be appropriate in different contexts. In general, however, School-wide PBS will be implemented within the context of an initiative to change school discipline across a portion of the schools in a geographic area (district, state, region). Evaluations of a School-wide PBS implementation effort will often address the following questions:

1.  Who is receiving training and support?

  1. What schools are receiving implementation support?
  2. What proportion of schools in the target area is implementing school-wide PBS?

2.  What training and technical assistance (TA) have been delivered as part of the implementation process?

  1. What training events have been conducted?
  2. Who participated in the training events?
  3. What was the perceived value of the training events by participants?

3.  Has the training and TA resulted in change in the behavior support practices used in schools?

  1. Are the faculty in participating schools implementing universal school-wide PBS?
  2. Are the faculty in participating schools implementing targeted and intensive individual positive behavior support?

4.  If schools are using SW-PBS is there an impact on student behavior?

  1. Has there been a change in reported student problem behavior, as indexed by:
     * Office discipline referrals
     * Suspensions
     * Expulsions
     * Referrals to special education
  2. Has there been change in student attendance?
  3. Has there been change in student academic performance?
  4. Has there been a change in perceived risk factors and protective factors that affect mental health outcomes?

5.  Has the Training and Technical Assistance resulted in improved capacity for the state/district/region to sustain SW-PBS, and extend implementation to scale?

  1. To what extent has the implementation effort resulted in improved capacity of the area to train others in school-wide PBS?
  2. To what extent has the implementation effort resulted in improved capacity to coach teams in school-wide PBS procedures?
  3. To what extent do local teams have evaluation systems in place that will allow them to monitor and improve school-wide PBS?
  4. To what extent does the state or district Leadership Team have an evaluation system in place to guide broad scale implementation efforts?

6.  Are faculty, staff, students, families, and community stakeholders satisfied?

  1. Are faculty satisfied that implementation of school-wide PBS is worth the time and effort?
  2. Are students satisfied that implementation of school-wide PBS is in their best interest?

7.  Policy impact

  1. Have changes in student behavior resulted in savings in student and administrator time allocated to problem behavior?
  2. Have policies and resource allocation within the area (district, school, state) changed?

8.  Implications

  1. Given evaluation information, what recommendations exist for (1) expanding implementation, (2) allocating resources, (3) modifying the initiative or evaluation process?

Evaluation Measures/Instruments

Evaluation plans often incorporate an array of measures to address the core evaluation questions. Some measures are purely descriptive or uniquely tied to the local context. Other measures may be more research-based, standardized, and experimentally rigorous. Issues of cost, time, and stakeholder needs will affect which measures are adopted. To accommodate variability in evaluation needs, a comprehensive model should offer multiple measures (some more rigorous, some more efficient) for key questions. The list of measures below is not offered with the assumption that every evaluation plan will use all of them; rather, each plan may find a sampling of these measures useful. We also recognize and encourage the use of additional, locally relevant measures.

The table below summarizes each core evaluation question, the measures that address it, the typical data collection cycle, and the metric each measure produces. Several of the measures listed are research-validated instruments; others are descriptive records unique to each initiative.

Evaluation Question/Focus: Who is receiving training and technical support?
Measures: School Profile.
Data Collection Cycle: Completed when a school begins implementation; updated annually.
Metric and Use of Data: Name, address, contact person, enrollment, grades served, and student ethnicity distribution.

Evaluation Question/Focus: What training and TA have been delivered? Did participants identify the training and TA as useful?
Measures: List of training events, persons participating, and training content; Training Evaluation Form.
Data Collection Cycle: Collected as part of each major School-wide PBS team training event.
Metric and Use of Data: Documents the teams and individuals present, the content of training, and participant perceptions of workshop usefulness.

Evaluation Question/Focus: Has the training and TA resulted in change in the behavior support practices used in schools (change in adult behavior)?
Measures: Team Implementation Checklist (TIC); EBS Self-Assessment Survey; School-wide Evaluation Tool (SET); Individual-Student Systems Evaluation Tool (I-SSET); School-wide Benchmarks of Quality (BoQ; Florida).
Data Collection Cycle: The TIC is collected at the first training event, and at least quarterly thereafter until the 80% criterion is met. The EBS Self-Assessment Survey is completed during initial training, and annually thereafter. The SET is completed annually as an external, direct-observation measure of SW-PBS practice implementation. The I-SSET is administered annually with the SET by an external evaluator. The BoQ is completed by school teams and assesses the same features as the SET, but based on team perception.
Metric and Use of Data: The TIC provides a percent implementation of universal-level SW-PBS practices, plus a sub-scale score for each of the SET subscales. The EBS Survey produces the percent of staff indicating whether School-wide, Specific Setting, Classroom, and Individual Student systems are in place and are important for improvement. The SET produces a total percent score and percent scores for seven subscales related to universal-level SW-PBS practices. The I-SSET produces three scores: the percent to which “foundation” practices are in place for individual student support, the percent to which “targeted” practices are in place, and the percent to which “individualized, function-based support” practices are in place. The BoQ produces a summary implementation score and sub-scale scores for SET factors.

Evaluation Question/Focus: If SW-PBS is implemented at criterion, is there improvement in the social and academic outcomes for students?
Measures: School-wide Information System (SWIS); School Safety Survey (SSS); Yale School Climate Survey (SCS); state academic achievement scores (unique to each state).
Data Collection Cycle: SWIS data are collected and summarized continuously. The SSS typically is administered annually by an external observer, at the same time as SET evaluations. The SCS is a direct survey of students and/or adults collected annually, or on a formal research/evaluation schedule. Literacy, math, and other achievement scores are assessed annually.
Metric and Use of Data: SWIS data indicate the frequency and proportion of office discipline referrals, suspensions, and expulsions. The SSS produces a perceived Risk Factor score and a perceived Protective Factor score, and serves as one index of the overall “safety” of the school. The SCS produces a standardized score indexing the perceived quality of the social culture of the school. The typical academic outcome is the proportion of students within identified grades (e.g., 3, 5, 8, 10) who meet state standards.

Evaluation Question/Focus: Have the training and TA efforts resulted in improved local capacity to implement SW-PBS?
Measures: SW-PBS Registry; Leadership Team Self-Assessment Survey.
Data Collection Cycle: The Registry is completed when the initiative begins, and is maintained as new people are identified. The Leadership Team Self-Assessment is completed by the Leadership Team at least annually.
Metric and Use of Data: The Registry provides a listing of:
*Leadership Team
*Coordinators
*Local Trainers
*Coaching Network
*Evaluation Team
*Schools Implementing SW-PBS
The Leadership Team Self-Assessment provides a summary score and sub-scale scores.

Evaluation Question/Focus: Are faculty, staff, students, families, and community stakeholders satisfied?
Measures: Faculty Impact Assessment; Student Impact Assessment.
Data Collection Cycle: Each is completed 3-6 weeks after school-wide expectations are taught.
Metric and Use of Data: The Faculty Impact Assessment provides a Likert-like rating of the perceived usefulness of SW-PBS practices. The Student Impact Assessment provides an index of whether students learned the school-wide expectations, and whether they find the SW-PBS process useful.

Evaluation Question/Focus: Do improvements sustain over time?
Measures: TIC, SET, BoQ, I-SSET; Leadership Team Self-Assessment; SWIS, SSS, standardized tests.
Data Collection Cycle: Annual assessment of the proportion of schools that meet criterion from one year to the next.
Metric and Use of Data: Provides a summary of the extent to which schools that reach criterion and produce valued gains sustain those achievements.

Evaluation Question/Focus: Policy and future implementation concerns.
Measures: Cost analysis (unique to each initiative).
Data Collection Cycle: Collected annually.
Metric and Use of Data: Documents whether savings accrue as a function of investing in SW-PBS.
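
As a closing illustration, the Python sketch below computes two of the metrics named in the table: a checklist percent-implementation score of the kind the TIC and SET report, and an office discipline referral (ODR) rate. The 0-2 item scoring and the per-100-students-per-school-day normalization are common conventions rather than requirements of this template, and all example numbers are invented.

    # Illustrative computations for two metrics named in the table above.
    # The scoring conventions (checklist items scored 0-2; referrals
    # normalized per 100 students per school day) are common conventions,
    # not requirements of this template; all example numbers are invented.

    def percent_implemented(item_scores: list[int], max_per_item: int = 2) -> float:
        """Checklist percent implementation (e.g., TIC/SET-style items)."""
        return 100 * sum(item_scores) / (max_per_item * len(item_scores))

    def odr_rate(referrals: int, enrollment: int, school_days: int) -> float:
        """Office discipline referrals per 100 students per school day,
        one common way to compare schools of different sizes."""
        return referrals / (enrollment / 100) / school_days

    # Hypothetical school: 17 checklist items scored 0/1/2, and 320
    # referrals in a school of 450 students over 178 school days.
    print(percent_implemented([2] * 12 + [1] * 4 + [0]))  # ~82.4
    print(round(odr_rate(320, 450, 178), 3))              # ~0.399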

Evaluation Report Outline