The Heller School for Social Policy and Management

Brandeis University

Sustainable International Development Graduate Program

HS 315 Advanced Monitoring and Evaluation Issues in Practice

Spring 2015, Module II

Fridays, 9:00-11:50, Rm TBD

Laura Roper, Ph.D.

Office Hours TBD

University Notices:

1.  If you are a student with a documented disability record at Brandeis University and wish to have a reasonable accommodation made for you in this class, please advise me immediately.

2.  You are expected to be honest in all of your academic work. The University policy on academic honesty is distributed annually as section 5 of the Rights and Responsibilities handbook. Instances of alleged dishonesty are subject to possible judicial action. Potential sanctions include failure in the course and suspension from the University. If you have any questions about my expectations, please ask.

Academic integrity is central to the mission of educational excellence at Brandeis University. Each student is expected to turn in work completed independently, except when assignments specifically authorize collaborative effort. It is not acceptable to use the words or ideas of another person—be it a world-class philosopher or your roommate—without proper acknowledgement of that source. This means that you must use footnotes and quotation marks to indicate the source of any phrases, sentences, paragraphs or ideas found in published volumes, on the internet, or created by another student. If you are in doubt about the instructions for any assignment in this course, you must ask for clarification.

______

Course Description

Monitoring and evaluation is meant to contribute to better development practice. Yet, in many respects, the sector has a learning deficit – why is it that the same mistakes are made during large-scale humanitarian response; that organizations fail to collect gender-disaggregated data and cannot identify the differential impact interventions have on men and women; that programs cannot tell whether they have an implementation problem or a theory problem? In this class we approach M&E from a strategic perspective and co-strategize on how best to institutionalize good M&E practice that really makes a difference in development outcomes. You will become more familiar with different approaches to evaluation; grow more sensitive to the ethical, cultural, and political issues that affect the quality of evaluations; and examine a number of monitoring and evaluation challenges along with examples of how to address them.

Core Competencies

Conceptual

Familiarity with ethical standards for evaluators and their practical application

Ability to identify the best evaluation approach for a given program

Deeper understanding of the links between theory of change, strategic planning, implementation, and evaluation, and how to address breakdowns in the cycle

Understanding the challenges of cross-cultural evaluation and strategies to address them

Understanding the challenges of reaching and gathering data from hard-to-reach/at-risk populations and how to address them

Integrating a gender perspective in M&E design

Knowing how to address organizational learning disabilities, based on experience in the humanitarian sector

Sustainable Development Statement

In a resource-constrained world, good M&E is essential for making good investment decisions. Although staff are sometimes resistant to or anxious about evaluation, it can often be positive for morale, as advances are recognized and staff get support and direction for program improvements. For the development and humanitarian sectors as a whole, cross-institutional learning is essential for identifying programs and organizations that are environmentally, economically, and socially sustainable if we are going to address global problems on a global scale.

Gender Statement

The vast majority of evaluations, unless specifically designed to look at gender issues, fail to do so. There is an unfortunate tendency to assume that if a community or family unit is better off, everyone in that community or family benefits. Given the numerous and well-documented unintended impacts of many development programs on women – from eliminating their access to subsistence plots when the promotion of cash crops takes more land, to disrupting important social rituals and exchanges when such things as wells or labor-saving devices are introduced – it is essential that the class learn techniques for building gender considerations into evaluations.

Course Requirements

Attendance in all sessions and prompt arrival to class

Preparation of all readings

Participation in class discussions and small group work

Completion of an in-class quiz

Presentation in an ideas market place

Final paper on evaluation issues, resources and approaches on a theme of your choosing.

A Paper on METHODOLOGY:

1. Pick a theme – such as gender analysis in humanitarian response or evaluation of environmental restoration programs – and identify existing guides for planning, monitoring, and evaluation, as well as case examples. Write a review covering the key strategic issues evaluations have addressed or should address, the evaluation challenges, and the M&E resources that might help you address those challenges, including examples of good evaluation practice. (That is, it is not a paper on whether programs are good or not, but on how, as an evaluator, you would go about determining whether programs are effective.)

2. Pick a particular approach – like experimental design, outcome mapping, or developmental evaluation – and write a paper covering its basic elements, when it is most appropriate to use, and when it should not be used. Your discussion should be illustrated by documented examples of its use in evaluating programs from the literature (academic or grey), with an assessment of the appropriateness of its use in each case and a discussion of any weaknesses or concerns.

3. Pick a program and design an evaluation. After researching existing tools and guides and identifying other examples of evaluations of similar programs, develop a full evaluation plan, including what monitoring systems and data you would want to have in place to facilitate strategic and cost-effective evaluation. You'll start the design with a background section on the program and what you've learned about how such programs are evaluated. You will then do a full design proposal.

Course Grading

Participation in discussions and group work – 10%

Demonstrated knowledge of readings – 15%

Class quiz – 15%

Ideas Marketplace – 10%

Final Project – 50%

All readings can be found on LATTE

Additional resources you may want to reference are:

www.unicef.org/evaldatabase

www.worldbank.org (click onto learning on upper right hand side)

www.ids.ac.uk/info/index.html

www.alnap.org (Active Learning Network for Accountability and Performance)

www.isnar.cgiar.org/gender/evaluation.htm

www.idrc.org

www.surveynetwork.org

Class Curriculum and Syllabus

Week 1 (March 13) Strategic and Ethical Considerations in Evaluation Choice

Theme for the week: This week we will examine the implications of different methodological choices, both in terms of the ways in which an evaluation may contribute to better project, program, or strategy design and in terms of the role of the evaluator. We will also look at ethical standards in evaluation and discuss whether there are ethical absolutes or whether ethics are situational, depending on the type of methodology being employed and/or the context of the program. Given the short 7-week time frame for the course, please come to the first class having prepared as many readings as possible. At a minimum, make every effort to read the highlighted items, as they will be discussed in class.

The Power of a Well-Designed Evaluation

Jennifer Mandel, “Closing the Loop – Responding to People’s Information Needs from Crisis Response to Recovery to Development: A Case Study of Post-Earthquake Haiti.”

Ethics and Evaluation

American Evaluation Association, “Guiding Principles for Evaluators,” ratified in 2004. Found at http://www.eval.org/Publications/aea06.GPBrochure.pdf

InterAction, “Data Collection in Humanitarian Response: A Guide for Incorporating Protection,” [2004?]

Evaluator Roles

Gary Skolits, et al. “Reconceptualizing Evaluator Roles.” American Journal of Evaluation 30(3), September 2009.

Carol Weiss, “Where Politics and Evaluation Meet.” Evaluation Practice 14(1), 1993.

Week 2 (March 20) Getting to the Truth

Theme for the Week: There are many obstacles to getting to the truth about social phenomena, from poor conceptualization of a problem, to lack of data or unreliable data, to matters of interpretation. There is even debate about whether there is an objective “truth” out there or whether facts and their meanings have to be negotiated. This week we will look at some of the perspectives on this topic, as these are healthy debates that are influential in shaping evaluation demand. What is not healthy is when sloppy thinking or ideology trumps a commitment to evidence-based analysis, as these present two of the biggest barriers to improving development practice. This week’s discussion should help you be both strategic and pragmatic in picking the best methods for understanding people’s experience with development interventions and identifying the strategic issues that need to be examined.

Different Perspectives; Different Realities

USAID and GCM, Research Rashomon: Lessons from the Cameroon Pre-exposure Prophylaxis Trial Site, Washington, DC, 2008 (?)

Anna Marie Madison, “Language in Defining Social Problems and in Evaluating Social Programs,” New Directions in Evaluation no. 86, Summer 2000, p 17-28. [TOC; participatory mapping of stakeholders; theory of action]

Attribution, Contribution and Sense-Making

Howard White, “An Introduction to the use of randomized control trials to evaluate development interventions.” International Initiative for Impact Evaluation, February 2011.

Elizabeth Dozois, et al, “DE 201: A Practitioner’s Guide to Developmental Evaluation.” the JW McConnell Foundation and International Institute for Child Rights and Development, 2010. Read p. 9-21, 25-48.

Week 3 (March 27) Cross-Cultural Evaluation and Evaluation of Hard to Reach Populations

NOTE: We’ll begin this week with an in-class quiz covering key concepts and issues introduced in the first two weeks. You will need to demonstrate understanding of what was discussed in class and will, in part, be graded based on demonstrated familiarity with the reading.

As practitioners in international development, it is very likely that you will at some point, if not regularly, work in cultures other than your own. Even within a country, there can be many subcultures based on ethnicity, geography, and self-identification (e.g. survivors of genocide, LGBT, etc.). Unreliable results and poor program and policy decisions can result from research/evaluation that is not sensitive to cultural nuances. This week we will begin by discussing the cultural dimensions of evaluation and whether and how you should “deal with” culture or explore the cultural dimensions of development interventions. We will then begin a discussion of the challenges associated with evaluating programs with a particularly vulnerable population – displaced victims of gender-based violence.

Cross-Cultural Evaluation

The Colorado Trust, “The Importance of Culture in Evaluation: A Practical Guide for Evaluators,” June 2007. Found at: http://www.thecoloradotrust.org/attachments/0000/2208/CrossCulturalGuide.r3.pdf

Annika Launiala, “How much can a KAP survey tell us….” Anthropology Matters 11(1)2009.

Resources

Janet Harkness, “Adaptation” (p. VII-1–19; p. 313–322 in pdf) and “Translation” (p. VIII-1–34; p. 315–331 in pdf), in Guidelines for Best Practice in Cross-Cultural Surveys, Survey Research Center, Institute for Social Research, University of Michigan, 2010. See also p. III-1–35 (p. 89–125 in pdf). (This manual is probably the most exhaustive guide to designing surveys available, from the University of Michigan, which is famous for its survey research.)

Gender-Based Violence

Beth Vann, Gender-Based Violence: Emerging Issues in Programs Serving Displaced Populations. JSI Research and Training Institute, September 2002. Chapter 3 as needed for general background (gbv van1), Chapters 7 and 8 (gbv van2).

Extra Credit: Create a scenario in which you are administering a survey that deals with a sensitive/controversial issue or (if you’re feeling ambitious, and) is administered to people of different cultural backgrounds (limit it to two). Design a 20-30 item survey.

Week 4 (April 17) Hard to Reach Populations, cont’d and Assessing Value for Money

Populations most in need are often the ones hardest to reach, particularly if they are transient or subject to abuses that have adverse psychological impacts. Evaluation, if done well, can be not only a means of identifying the dimensions of a problem and what interventions work, but also a means of giving voice to the voiceless. We are going to continue our examination of M&E with victims of GBV and consider the appropriate forms of systematic inquiry with other vulnerable and hard-to-reach populations. We are then going to shift gears and talk about the growing emphasis on demonstrating value for money (the term DFID uses) or Social Return on Investment (SROI – the more common term used in the US).

Gender-Based Violence, cont’d

Example:

Thomas McHale, et al., “‘Every Home Has Its Secrets’: A Mixed-Methods Study of Intimate Partner Violence, Women’s Empowerment and Justice on Idjwi Island, DRC.” Harvard School of Public Health, April 2011.

Cari Clark, Gender-Based Violence Research Initiatives in Refugee, Internally Displaced, and Post-Conflict Settings: Lessons Learned. Prepared for the Reproductive Health for Refugees Consortium, April 2003.

Resource:

USAID, Violence against Women and Girls: A Compendium of Monitoring and Evaluation Indicators, October 2008. Chapter 2; then pick one or two of the categories of indicators (e.g., intimate partner violence, female genital cutting/mutilation) and read about three indicators.

Value for Money/SROI

Context International Cooperation, “Social Return on Investment: A practical guide for the development cooperation sector,” October 2010, p. 14-46.

PACT, “Using Social Return on Investment for Evaluating an Advocacy Program in the Ukraine,” [2012]

Resource

The SROI Network, “A Guide to Social Return on Investment,” January, 2012 for more detailed guidance.

Week 5 (April 24) Evaluation in the Humanitarian Sector

As your careers develop, you’ll be expected to get beyond project evaluation to much more sophisticated and complex evaluations. These evaluations will involve multiple stakeholders, will take place in operating contexts that create challenging program constraints, and may cover programs with multiple moving parts – all of which create challenges for evaluators. One of the most complicated contexts for implementation and evaluation is humanitarian emergencies. This is a sector that has invested a lot in the last decade in improving its evaluation practice.

Karen Proudlock, et al. “Improving humanitarian impact assessment: bridging theory and practice,” Chapter 2, 2009 Review of Humanitarian Action, ALNAP, 2009.

Paul Knox-Clarke and John Mitchell, “Reflections on the Accountability Revolution,” Humanitarian Exchange, Number 52, October 2011, p. 3-5; and Jonathan Potter, “Accountability – don’t forget your staff,” p. 15-18.