Evaluation: An Overview
Chapter 13

HS 490

Planning and Implementing

•Whether they realize it or not, program planners are constantly evaluating their health promotion efforts by asking questions such as: Did the program have an impact? How many people stopped smoking? Were the participants satisfied with the program? Should we change anything about the way the program was offered? What would happen if we just changed the time we offer the program? Should we expect a greater turnout than what we got tonight? Although all these questions are linked to evaluation, some will be of greater importance than others.

Informal Evaluation

•“Impromptu unsystematic procedures”

•Such evaluation processes are used when making small changes in programs, such as changing the time of the program, adding an additional class selection, consulting colleagues about a program concern, or making program changes based on participant feedback.

•Characterized by an absence of breadth and depth because they lack systematic procedures and formally collected evidence. As humans, we are limited in making judgments both by the lack of opportunity to observe many different settings, clients, or students and by our own past experiences, which both inform and bias our judgments. Informal evaluation does not occur in a vacuum. Experience, instinct, generalization, and reasoning can all influence the outcome of informal evaluations, and any or all of these may be the basis for sound or faulty judgments.

Formal Evaluation

•Processes are characterized by “systematic, well-planned procedures”

•They are processes designed to control a variety of extraneous variables that could otherwise produce inaccurate evaluation outcomes.

•Evaluation is critical for all health promotion programs. It is the only way to separate successful programs from those that are not; it is a driving force for planning new effective health promotion programs, improving existing programs, and demonstrating the results of resource investments.

Table 13.1 Characteristics of Formal and Informal Evaluation

Evaluation

•Without adequate evaluation, accurate information is not gained, and decisions are based on speculation.

•Evaluation begins when the program goals and objectives are being developed. Evaluation will not only help determine whether program goals and objectives are met, but it will also answer questions about the program as it is being implemented.

Stakeholders

•Those individuals who have a vested interest in the program.

•Program planners, administrators, program facilitators, and the representatives from the funding source all have specific questions they would like answered regarding the program’s development and outcome.

•Program planners may want to know if the program met the needs of the target population.

•Program administrators may want to know if the program is making any money.

•Program facilitators may want to know if participants changed their behavior as a result of the program.

•Representatives from the funding source may be interested in knowing if the program was cost effective.

•These questions can all be answered if the evaluation is properly planned and implemented.

•Evaluations may be political. Results may be intentionally skewed by reporting only success and not weaknesses to decision makers in order to prevent the elimination of a program.

•An evaluation can be a political “hot potato.” For example, if objective results lead to the recommendation that a drug abuse program serving poor, pregnant minority women be eliminated, what are the implications for the agency with regard to morale, racial harmony, and trust in governmental agencies?

Basic Terminology

•Evaluation- A variety of definitions of the term evaluation have been written; most include the concept of determining the value or worth of the object of interest, or evaluand (the health promotion program), against a standard of acceptability.

•Standards of acceptability- The minimum levels of performance, effectiveness, or benefits used to judge the value of a program; typically expressed in the “outcome” and “criterion” components of a program’s objective.
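The comparison a standard of acceptability implies can be sketched in a few lines of code. This is only an illustration; the quit-rate figures and the 20% criterion below are invented, not from the text:

```python
# Hypothetical sketch: judging a program outcome against a standard
# of acceptability. All numbers here are invented for illustration.

def meets_standard(observed: float, standard: float) -> bool:
    """Return True if the observed outcome meets or exceeds the
    minimum acceptable level of performance."""
    return observed >= standard

# Suppose the program objective sets 20% smoking cessation as the
# minimum acceptable outcome (the "criterion" component).
standard_of_acceptability = 0.20
quit_rate = 34 / 150  # 34 of 150 participants quit (about 22.7%)

print(meets_standard(quit_rate, standard_of_acceptability))  # True
```

The point is simply that the standard is fixed before the outcome is observed, so the judgment is not made after the fact.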

Table 13.2 Standards of Acceptability

•Process evaluation- Any combination of measurements obtained during the implementation of program activities to control, assure, or improve the quality of performance or delivery.

•Impact evaluation- focuses on “the immediate observable effects of a program, leading to the intended outcomes of a program; intermediate outcomes.” Measures of awareness, knowledge, attitudes, skills and behaviors yield impact evaluation data.

•Outcome evaluation- focuses on “an ultimate goal or product of a program or treatment, generally measured in the health field by morbidity or mortality statistics in a population, vital measures, symptoms, signs, or physiological indicators on individuals.”

•Formative evaluation- “any combination of measurements obtained and judgments made before or during the implementation of materials, methods, activities or programs to control, assure or improve the quality of performance or delivery.” Examples include a needs assessment, pretesting a target population, or pilot testing a program.

•Summative evaluation- “any combination of measurements and judgments that permit conclusions to be drawn about impact, outcome, or benefits of a program or method.”

Purpose for Evaluation

•Basically, programs are evaluated to gain information and make decisions. The types of evaluation are distinguished by how the information is going to be used. The information may be used by program planners during the implementation of a program to make improvements in services (process evaluation). It may be used to see if certain immediate outcomes, such as knowledge, attitude, skill, and behavior change, have occurred (impact evaluation). It may also be used at the end of a program to determine whether long-term goals and objectives have been met (outcome evaluation).

Six general reasons why stakeholders may want programs evaluated:

1. To determine achievement of objectives related to improved health status: Probably the most common reason for program evaluation is to determine if objectives of the program have been met. Evaluation for this reason may also be used to determine which of several programs was most effective in reaching a given objective.

2. To improve program implementation: Program planners should always be interested in improving a program. Through program evaluation, weak elements can be identified, removed, and replaced.

3. To provide accountability to funders, community, and other stakeholders: Many stakeholders are interested in the value of a program to a community, or if the program is worth its cost. Thus, an evaluation may provide decision makers with the information to determine if the program funding should continue, discontinue, or expand.

4. To increase community support for initiatives: The results of an evaluation can increase the community awareness of a program. Positive evaluation information channeled through the media can help sell a program, which in turn may lead to additional funding.

5. To contribute to the scientific base for community public health interventions: Program evaluation can provide findings that can lead to new hypotheses about human behavior and community change, which in turn may lead to new and better programs.

6. To inform policy decisions: Program evaluation data can be used to impact policy within the community. For example, a number of communities have passed local ordinances based on the results of evaluative studies on secondhand smoke.

The Process for Evaluation

•The process of evaluating a program or activity begins with the initial program planning. The following list provides guidelines for planning and conducting an evaluation:

Planning

•Review the program goals and objectives.

•Meet with the stakeholders to determine what general questions should be answered.

•Determine whether the necessary resources are available to conduct the evaluation; budget for additional costs.

•Hire an evaluator, if needed.

•Develop the evaluation design.

•Decide which evaluation instrument(s) will be used and, if needed, who will develop the instrument.

•Determine whether the evaluation questions reflect the goals and objectives of the program.

•Determine whether the questions of various groups are considered, such as the program administrators, facilitators, planners, participants, and funding source.

•Determine when the evaluation will be conducted; develop a time line.

Data Collection

•Decide how the information will be collected: survey, records and documents, telephone interview, personal interview, observation.

•Determine who will collect the data (internal evaluator, external evaluator, or evaluation consultant).

•Plan and administer a pilot test.

•Review the results of the pilot test to refine the data collection instrument or the collection procedures.

•Determine who will be included in the evaluation, for example, all program participants or a random sample of participants.

•Conduct the data collection.
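If a random sample of participants is chosen rather than the full roster, the draw itself is straightforward. The sketch below uses Python's standard library; the roster of 200 enrollees and the sample size of 30 are invented for illustration:

```python
# Hypothetical sketch: selecting a simple random sample of enrolled
# participants for data collection. Roster and sample size are invented.
import random

participants = [f"participant_{i}" for i in range(1, 201)]  # 200 enrollees
random.seed(13)  # fixed seed so the same sample can be re-drawn later
sample = random.sample(participants, k=30)  # draws without replacement
print(len(sample))  # 30
```

Fixing the seed lets the evaluator document exactly which participants were selected, which supports the systematic, well-planned character of formal evaluation.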

Data Analysis

•Determine how the data will be analyzed.

•Determine who will analyze the data (a statistician is often needed).

•Conduct the analysis, and allow for several interpretations of the data.
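One simple analysis consistent with the steps above is a pretest/posttest comparison of mean scores. A minimal sketch, with all scores invented for illustration:

```python
# Minimal sketch of one common analysis: change in mean knowledge
# scores from pretest to posttest. All scores are invented.
from statistics import mean

pretest  = [52, 48, 61, 55, 47, 58]
posttest = [67, 60, 70, 64, 59, 72]

change = mean(posttest) - mean(pretest)
print(f"Mean change: {change:+.1f} points")  # Mean change: +11.8 points
```

A real analysis would also examine the spread of individual changes and consider rival explanations for the gain, which is why the text recommends allowing for several interpretations of the data.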

Reporting

•Determine who will receive the results.

•Choose who will report the findings.

•Determine how (in what form) the results will be disseminated.

•Discuss how the findings of the process or formative evaluation will affect the program.

•Decide when the results of the impact, outcome, or summative evaluation will be made available.

•Disseminate the findings.

Application

•Determine how the results can be implemented.

Practical Problems in Evaluation

•Certain problems may exist that may impede an effective evaluation.

–The planner failed to build evaluation into program planning.

–Adequate procedures cost time and resources.

–Changes sometimes come slowly.

–Some changes do not last.

–It is often difficult to distinguish between cause and effect.

–Conflict can arise between professional standards and do-it-yourself attitudes.

–Sometimes people’s motives get in the way.

–It is difficult to properly evaluate multilevel interventions.

•Examples of these problems in health promotion programs include not collecting initial information from participants because evaluation plans were not in place, failing to budget for the cost of the evaluation (e.g., printing questionnaires, additional staff, postage), or conducting the evaluation before a change can occur (e.g., changes in cholesterol level) or too long after program completion (e.g., long-term effects of a weight loss program).

Evaluation in the Program-Planning Stages

•The evaluation design must reflect the goals and objectives of the program. To be most effective, the evaluation must be planned in the early stages of program development and must be in place before the program begins. Results from evaluations conducted early in the program-planning process can assist in improving the program. Having a plan in place to conduct an evaluation before the end of a program will make collecting information regarding program outcomes much easier and more accurate.

Baseline Data

•Baseline data reflect the initial status or interests of the participants. Baseline data, or data from a needs assessment, can be used for comparison with the data collected from program participants later on.

•Early data regarding the program should be analyzed quickly to make any necessary adjustments to the program.

•By developing the summative evaluation plan at the beginning of the program, planners can ensure that the results will be less biased. Early development of the summative evaluation plan ensures that the questions answered reflect the original objectives and goals of the program.

Who will conduct the evaluation?

•At the beginning of the program, planners must determine who will conduct the evaluation. The program evaluator must be as objective as possible and should have nothing to gain from the results of the evaluation. The evaluator may be someone associated with the program or someone from outside.

Internal Evaluation

•Someone trained in evaluation who is personally involved with the program conducts the evaluation.

•An internal evaluator would have the advantage of being closer to the program staff and activities, making it easier to collect the relevant information.

External Evaluation

•An evaluation conducted by someone who is not connected with the program. Often an external evaluator is referred to as an evaluation consultant.

Evaluation Consultant

•This type of evaluator is somewhat isolated, lacking the knowledge and experience of the program that the internal evaluator possesses. Evaluation of this nature is also more expensive, since an additional person must be hired to carry out the work. However, external evaluation can provide a more objective outlook and a fresh perspective, and it helps ensure an unbiased outcome evaluation.

Figure 13.1 Characteristics of a Suitable Consultant

•Is not directly involved in the development or running of the program being evaluated.

•Is impartial about evaluation results (i.e., has nothing to gain by skewing the results in one direction or another).

•Will not give in to any pressure by senior staff or program staff to produce particular findings.

•Will give the staff the full findings (i.e., will not gloss over or fail to report certain findings for any reason).

•Has experience in the type of evaluation needed.

•Communicates well with key personnel.

•Considers programmatic realities (e.g., a small budget) when designing an evaluation.

•Delivers reports and protocols on time.

•Relates to the program.

•Sees beyond the evaluation to other programmatic activities.

•Explains both the benefits and risks of evaluation.

•Educates program personnel about conducting evaluation, thus allowing future evaluations to be done in house.

•Explains material clearly and patiently.

•Respects all levels of personnel.

•Whether an internal or external evaluator conducts the program evaluation, the main goal is to choose someone with credibility and objectivity. The evaluator must have a clear role in the evaluation design, accurately reporting the results regardless of the findings.

•The question of who will receive the evaluation results is also an important consideration. The evaluation can be conducted from several vantage points, depending on whether the results will be presented to the program administrator, the funding source, the organization, or the public. These stakeholders may all have different sets of questions they would like answered. The evaluation results must be disseminated to groups interested in the program.

•The planning process of the evaluation should include a determination of how the results will be used. It is especially important in process and formative evaluation to implement the findings rapidly to improve the program. However, an action plan is needed in summative, impact, and outcome evaluation to ensure that the results are not filed away, but are used in the provision of future health promotion programs.

•Evaluation can be thought of as a way to make sound decisions regarding the worth or effectiveness of health promotion programs, to compare different types of programs, to eliminate weak program components, to meet requirements of funding sources, or to provide information about programs. The evaluation process takes place before, during, and after program implementation. If the evaluation is well designed and conducted, the findings can be extremely beneficial to the program stakeholders.