Program and Policy Evaluation Workshop

Project Directors’ Meeting for CSP Grants for Replication and Expansion of High Quality Charter Schools

March 6, 2014

Washington, DC

Presented By:

Karen Shakman

Education Development Center

Logic Models as a Tool for Building Program Evaluations

A Workbook Created by

the Regional Educational Laboratory-Northeast and Islands

Table of Contents

From Logic Models to Program and Policy Evaluation (1.5 hours)

Annotated Agenda

Session II Purpose

Pre-Work Assignment

Reviewing Logic Models

Case Examples Revisited

Guidelines for Reviewing a Logic Model

Introducing Evaluations

Types of Evaluations

From Logic Models to Evaluation Questions

Developing Evaluation Questions

Considering the Audience

Generating Indicators

Using the Logic Model to Generate Indicators

Identifying the Right Indicators

Quantitative and Qualitative Indicators

Building an Evaluation Design

What type of evaluation is right for you?

Identifying Appropriate Sources of Data

Creating a Data Collection Framework

The Continuum of Rigor in Outcome/Impact Evaluation Designs

Putting It All Together

Evaluation Overview/Prospectus

Example of Gantt Chart

Closing and Thank You

Activity: What did you learn?

References and Resources

Logic Model Resources

Evaluation Resources

Appendix A: Simple Logic Model Template

Appendix B: College Ready Sample Logic Model

Appendix C: District Educator Evaluation Logic Model

From Logic Models to Program and Policy Evaluation (1.5 hours)

Annotated Agenda

5 Minutes / Overview of Second Session / Facilitator will review the goals of the session and the agenda.
8 Minutes / Review Logic Models / Facilitator will review logic models, what they are useful for, and what limitations they have. A graphic of a simple logic model and a logic model from one of the cases will be reviewed. Facilitator will also introduce the cases that will be employed as examples.
15 Minutes / Introducing Evaluations / Facilitator will introduce types of evaluations, the value of implementing evaluation at the outset of program development, and the role that logic models may play in supporting evaluation. Two overall types of evaluation will be presented: evaluations that focus on improving a program and those that focus on proving, or demonstrating, the impact or outcomes of an initiative.
Activity: The activity will have participants brainstorm ways they will know their program or policy is achieving its goal.
13 Minutes / From Logic Model to Evaluation Questions / Facilitator will begin this section with an overview of types of evaluation questions, followed by guidelines for good questions and a brief opportunity to differentiate formative and summative evaluation questions. Facilitator will then introduce the idea that different audiences desire different information about a program or policy and therefore ask different questions. Participants will be introduced to a table that delineates different types of audiences, questions, and uses of evaluation.
Activity: The activity will have participants brainstorm formative or summative questions about their own program or policy.
25 Minutes / Generating Indicators; Identifying the Right Indicators; Quantitative and Qualitative Indicators / Facilitator will introduce the concept of indicators and provide an overview of how indicators may be generated from the logic model, specifically from the strategies and activities and the outcomes sections of the model. Facilitator will illustrate this with the College Ready case. This section closes with a discussion of qualitative and quantitative indicators and the use and value of both types of measures in an evaluation.
Activity: The first activity will have participants practice generating indicators based on an example given. The second activity will have participants practice generating both quantitative and qualitative indicators, based on one of the cases or their own example.
15 Minutes / Building an Evaluation Design / Facilitator will begin this section with the question, “What type of evaluation is right for you?” Specifically, purpose, audience, capacity, and priority must be considered. Facilitator will then transition to a discussion of data collection, specifically the types of data available to participants and the best data to answer specific questions. Types of data, both quantitative and qualitative, will be reviewed. Facilitator will then introduce the data collection framework tool, which outlines the outcomes of interest, data sources, responsible parties, and timeline. This is followed by discussion of evaluation design, as distinct from the data collection framework.
Activity: Participants will practice by brainstorming their own data sources across different types of data.
6 Minutes / Putting It All Together / This final section of the workshop opens with discussion of an evaluation prospectus, or overview, and the key questions to consider when generating this short document, which can serve as the “calling card” for an evaluation, whether for potential funders or for potential evaluators. Facilitator will close with presentation of a Gantt chart as a useful tool for managing an evaluation and setting realistic timelines and deliverables.
3 Minutes / Closing and Thank You / Facilitator will close the workshop with a thank you and an invitation to be in touch with further questions.

Session II Purpose

The purpose of this second workshop is to demonstrate how logic models may be used as a tool specifically for developing a program or policy evaluation. The workshop will:

Reintroduce logic models as an effective tool, specifically for evaluation;

Invite participants to practice using logic models to develop evaluation questions and indicators of success;

Provide guidance in how to determine the appropriate evaluation for a specific program or policy.

Pre-Work Assignment

Based on the work in the first session, participants may come to the workshop with a draft logic model for a particular program or policy.

Directions: A sample logic model template is provided at the back of the workbook and may be used to generate a simple logic model. Participants may use this logic model to guide their work in the session.

Reviewing Logic Models

Please refer to the example logic models in the back of the workbook to remind yourself of the elements of a logic model. Here are a few quick reminders about what a logic model is, and what it isn’t. A logic model is:

A graphic representation of the theory of change driving a program or policy;

A framework for planning, implementation, and evaluation.

A logic model is not:

A strategic plan;

An evaluation design.

While a logic model is not a strategic plan or an evaluation design, it can be useful in developing either of these more detailed resources. The focus of this workshop is on the latter—how does a logic model support the development of an evaluation plan for a program or policy?

Case Examples Revisited

Case Study #1: College Readiness High School Program

College Ready is a school-based college access program for 9th-12th grade students. Students are identified for the program based on Free and Reduced Lunch status, recommendations from school guidance counselors, and/or recommendations from 8th grade English and Math teachers. Students participate in monthly meetings as a group with the College Ready staff, are provided with one-on-one counseling with College Ready staff, are assigned an adult mentor and a peer mentor, and participate in a series of evening and summer workshops. In addition, families make a commitment to the program and attend a series of workshops specifically designed to prepare the whole family for the college application process. The goal of the program is to significantly increase college attendance among low-income students.

Case Study #2: Redesigning a District’s Educator Evaluation Process

A school district wants to review and update the teacher evaluation process they have used for more than 10 years. The new system must reflect the new state guidelines for evaluation, which include a requirement for multiple measures, including a student learning measure. However, much is left to the district to decide about how decisions will be made, what measures to use, who will conduct the evaluations, and how the evaluation process will be managed and supported. The district has determined, in keeping with state guidelines, that the new evaluation will assess teachers’ professional practice and their impact on student learning. The district leadership would like the system to be supported by teachers, and they would like it to effectively differentiate among teachers, support teachers’ ongoing professional growth, lead to improvements in teacher practice, and ultimately positively influence student learning.

Guidelines for Reviewing a Logic Model

Participants may have created draft logic models for a program or policy they are engaged in or considering in their work. These drafts will serve as templates to guide examples throughout this session.

Discuss: If the group is small, one participant’s sample will be shared electronically, and that participant will walk the group through the model. If the group is too large for a single discussion, participants may be divided into breakout groups (virtual or face-to-face) to discuss the questions below. A facilitator and a note-taker will be assigned to each group. Summary questions and comments will be brought back to the large group.

The facilitator may ask the following questions to guide discussion:

  • What elements of the logic model were hardest to develop?
  • Is the problem statement the “right grain size”?
  • Within the strategies and activities, did you identify overarching strategies?
  • What assumptions did you uncover?
  • What is the timeframe for your outcomes? What are the impacts?
  • What was your process for developing the model?
  • What requires further explanation or discussion?

Introducing Evaluations

Program and policy evaluation helps to answer important questions that inform our work. At a basic level, evaluation answers the questions: Are we successful? Have we had an impact? What exactly is making the difference?

More specifically, evaluations ask questions such as:

Is the program or policy effective?

Is the program or policy working as intended?

What aspects of the program are working? What aspects are not working?

High-quality evaluation is designed to support your work, inform what you do, and enhance your impact. To do so, evaluation should be considered at the outset of program and policy design, ideally while the logic model is being developed. In other words, as a program or policy is conceived, its evaluation should be part of the conversation.

Consider questions such as:

How will we know if we’re successful?

What do we anticipate to be the impact of this policy?

What do we think the most influential aspects of the program will be?

Each of these questions suggests a direction for evaluation. Do not wait until the program or policy is in the midst of implementation to begin considering these questions; invest early in thinking them through and designing an evaluation that will help answer them. It may also be useful to involve others, including staff and participants, in planning the evaluation.

Activity: How will I know?

Consider your own program or policy. How will you know if one or more of your strategies have been successful? Take a moment to list some of the ways you will know your efforts have yielded the results you hope to achieve.

Types of Evaluations

The purpose of this section is to provide an overview of the types of evaluations and their different purposes.

Improve: These are formative, process, or implementation evaluations.

Prove: These are summative, results, or outcome evaluations.

Most evaluation questions emerge from the strategies and outcomes sections of the logic model. You want to know about the strategies you’re trying and how they’re going, and you want to know about outcomes and impact.

Generally, evaluations that focus on strategies (and outputs) are process evaluations, designed to help guide changes or improvements. Evaluations that focus on the outcomes in the logic model are generally summative evaluations, designed to prove the value, merit, or impact of the program or policy.

There are generally four types of evaluations:

Needs assessment (IMPROVE)
This type of evaluation determines what is needed (at the outset) and helps set priorities (e.g., what is needed to increase students’ college attendance?). These evaluations are often designed to help create or build a program or policy, so a logic model might be developed after the needs assessment. In fact, the needs assessment might provide information that helps clarify the problem to which the program or policy is designed to respond.

Process/Formative evaluation (IMPROVE)
This type of evaluation examines what goes on while a program is in progress. It assesses what the program is, how it is working, and whom it is reaching (e.g., are participants attending as anticipated?).

Outcome evaluation (PROVE)
This type of evaluation is designed to determine what results from a program and its consequences, generally for the people most directly affected by the program (e.g., did participants increase their knowledge, change attitudes or behaviors, etc.?).

Impact evaluation (PROVE)
This type of evaluation determines the net causal effects of the program beyond its immediate results. Impact evaluation often involves a comparison of what appeared after the program with what would have appeared without it. These evaluations often include comparison groups, interrupted time series, or other designs that allow evaluators to compare what did happen to the target population with what would have happened without the program (e.g., achievement scores, acceptance rates).

From Logic Models to Evaluation Questions

The purpose of this section is to make the connection between the logic model and the development of appropriate evaluation questions, using the logic model as a basis for developing these questions. The first step in making the transition from the logic model to a potential evaluation is to consider the questions derived from the model that you may want answered.

Developing Evaluation Questions

As noted above, some questions ask about improvements to the program or policy (formative/process/implementation/improve questions), while others ask about results or impacts (summative/outcome/prove questions). Generally:

Formative questions are asked while the program is operating and are for the purpose of program improvement;

Summative questions are asked at or after the completion of the program and are for the purpose of determining results and assessing value.

Regardless of the type of questions, there are some guidelines to consider for all evaluation questions.

Can the question be answered given the program? One of the main reasons for building a logic model as part of program evaluation is to determine what questions are appropriate based on the program. By describing what the program is, the logic model helps determine what is appropriate to evaluate.

Are the questions high-priority? Try to distinguish between what you need to know and what might merely be nice to know. What are the key, most important questions? For whom? Why?

Are the questions practical and appropriate to the capacity you have to answer them? Consider time, resources, and the availability of assistance needed to answer the questions. As appropriate, bring stakeholders together and negotiate a practical set of questions. Remember, it is better to answer a few questions thoroughly and well than to answer many questions superficially.

Are the questions clear and jargon-free? Apply the “Great Aunt Lucy test”: would someone like your great aunt Lucy, or anyone else who is not steeped in the language of your particular field, understand the question? Avoid jargon and vague words that can have multiple meanings. Always define key terms so that everyone understands their meaning.

Activity: Formative and Summative Evaluation

Now, try it yourself. Come up with a formative and a summative evaluation question for the College Ready case or for a program or policy from your own work.

  • Formative evaluation:

Topic: College Ready or ______

Question:

  • Summative evaluation:

Topic: College Ready or ______

Question:

Considering the Audience

Another key aspect of developing good evaluation questions is to consider the different audiences, or stakeholders, for a program or policy, the different types of questions they might have, and how they would use the answers to those questions (e.g., what decisions would result from the answers).

This sample chart outlines some typical audiences, the types of questions they are likely to have, and how they might apply the answers to these questions to make decisions. (Source: Kellogg Foundation Logic Model Handbook, 2006)

Audience / Typical Questions / Evaluation Use
Program Staff / Are we reaching our target population (e.g., high school students; low-income families with preschool-age children)? Are participants in the program engaged? Satisfied? Is the program being run well? How can we improve the program? / Day-to-day program operations; changes in program design and delivery
Participants / Did the program help me? Did it help others? How could the program better serve my needs? How could I get more out of the program? / Decisions about participation and the program’s value to them
Public Officials / Whom does the program serve? Is it reaching the target population? What difference does the program make? Are participants engaged and satisfied with the program? Is the program cost-effective? / Decisions about support, commitment, funding, and scale-up/duplication
Funders / Is the program meeting its goals? Is the program worth the cost? / Decisions about ongoing funding; accountability

Activity: Generating Questions for Different Audiences

Think about your own context and consider:

(1) Audience: Who are the different members of each stakeholder group (staff, participants, etc.)?

(2) Questions: What questions might different stakeholders have about the program or policy?