EVALUATING AN EDUCATION PROGRAM
(How to Demonstrate the Power of Education)
Presenter: Debra Hall, PhD, RN, CCRN
Director, Educational Development
Baptist Health Lexington
(859) 260-5408
Objectives:
The participant will
(1) Differentiate between levels and models of evaluation
(2) Identify specific methods of educational program outcomes-based evaluation
(3) Apply methods of analyzing and interpreting outcomes-based evaluative data to sample data
(4) Participate in a work group activity to identify appropriate method(s) to use for outcomes-based evaluation of a specific educational program
(5) Develop a draft action plan for evaluation of an educational program, including a timeline
Evaluate -
- To ascertain or fix the value or worth of.
- To examine and judge carefully; appraise; make a judgment.
- Mathematics: To calculate the numerical value of; express numerically.
Evaluation = a judgment based on systematically collected data; determines the worth or merit of an object (whatever is being evaluated); uses inquiry and judgment methods
Goal of Evaluation - to influence decision-making or policy development by providing “hard” data
Significant changes in healthcare and reimbursement – These changes have led to more emphasis on the need to justify programs and to have a variety of skills in one’s repertoire (clinical/educational/business). Reimbursement is tied to patient outcomes, decreased length of stay, and elimination of healthcare-associated infections and injuries (e.g., CAUTI, CLABSI, patient falls). In addition, there is an increase in regulatory and accrediting body requirements.
Changes in Education -
One of the major trends in professional development is to research the outcome of professional development activities and demonstrate how they contribute to quality health care. Therefore, measuring outcomes of education is essential (Avillion, 2001).
Program evaluation –
- a systematic process which determines the success and value of educational programs
- systematically collected information about a program or some aspect of a program to make necessary decisions about the program
- the information needed to make these decisions influences the type of evaluation undertaken to improve the program, how the data will be collected, and how the data will be analyzed and reported so that others can understand the information
- can include a variety of different types of evaluation, such as needs assessments, accreditation, cost/benefit analysis, effectiveness, efficiency, formative, summative, goal-based, process, outcomes, etc.
- the purpose of the program is key in determining the success of the program
***Planning the program/class/project includes planning the evaluation***
Purpose of program evaluation
- to demonstrate the value of education; understand, verify, or increase the effect of the program on students/clients/employees – were you able to achieve what you wanted to achieve using this method of education?
- to determine effectiveness/ascertain what elements are valid for future use; identify program strengths and weaknesses to improve the program; verify that you are doing what you think you are doing – were you able to achieve the change you wanted to achieve?
- to calculate cost/benefit or provide valid comparisons between programs to determine which is more effective – is this method worth the resources it uses?
The basic distinction between types of evaluation is based on time frame (Scriven, 1967):
Formative evaluation – focuses on the actual process
Summative evaluation – focuses on the final product
Both types can employ any evaluation method.
A. Formative Evaluation
Formative evaluation is conducted to strengthen and improve the program/class/project/activity being evaluated. It is conducted while the program/activity/etc. is occurring, and the results are used to decide how the program is delivered, or what form the program will take.
Emphasis is placed on assessment of the elements of the program. For example, a nursing inservice on documentation of patient ambulation would include an assessment of how nurses currently document and this would be included as part of the inservice.
Formative evaluation could be used to determine what should be included when developing a curriculum, to identify barriers to program success, or to evaluate a long-standing program and discover if the objectives or goals of the program are being achieved. Therefore, evaluation should focus on variables over which the program administrators have some control.
Includes:
Defining and identifying the scope of the problem – (Methods: brainstorming, focus groups, Delphi methods, stakeholder analysis, lateral thinking)
Conducting a needs assessment to determine who needs the program, how great the need is, and what meets the need (Methods: analysis of existing data, sample surveys, interviews, qualitative research, focus groups)
Conceptualizing to define the program, the target audience and the possible outcomes (Methods: simulation techniques, flow charting, project planning)
Process-Based Evaluation – geared to understanding how a program works – how it produces the results it produces
Questions to ask yourself when designing an evaluation to understand and/or closely examine the processes in your program:
1. How do educators, participants, and their supervisors know that the program was successful?
2. Did the participants achieve the program objectives?
3. What is required of educators to deliver the program?
4. How are educators prepared on how to deliver the program?
5. Did educators use appropriate skills?
6. How are participants selected for the program?
7. How do educators determine what will be included in the program?
8. Is the program being correctly delivered?
9. Is the program being delivered as intended?
10. Would the program benefit from alternative delivery methods?
11. What do educators, participants, and supervisors consider program strengths?
12. What typical complaints are heard from participants or their supervisors?
13. What do participants and/or their supervisors recommend to improve the program?
Methods: forms or discussion can be used to collect this type of data – qualitative and quantitative monitoring, online surveys, assessment of participants. For repeating programs, use cumulative data and trend for patterns (a brief sketch of trending repeated offerings appears at the end of this subsection).
Examples: student class achievement; student satisfaction; teacher observation of class (including participation); written assignments; small group participation; curriculum review; class observation by a trained observer; faculty/course evaluations; participant enrollment
Key points to remember with process evaluation: Is the program causing something to happen? Other events or processes may actually be causing the outcome, or preventing the anticipated outcome. People usually choose to participate, so they self-select into the program. It may be the differences between people who participate and those who do not that account for the difference in the outcome.
Process evaluation is useful if:
Programs are long-standing and have changed over time
Employees or customers report a large number of complaints about the program
Large inefficiencies are apparent in delivering the program
An accurate portrayal to outside parties of how a program truly operates is needed (e.g., for replication elsewhere)
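As a rough illustration of trending cumulative process-evaluation data for a repeating program, the Python sketch below computes the mean satisfaction score per offering and notes whether each offering moved up or down from the previous one. The offerings, scores, and the 1-5 scale are hypothetical placeholders, not data from any actual program.

```python
# Minimal sketch: trending cumulative process-evaluation data across repeated
# offerings of the same program. All names and scores are hypothetical.
from statistics import mean

# Hypothetical mean satisfaction ratings (1-5 Likert scale) per offering,
# listed in chronological order.
offerings = {
    "2023-Q1": [4.2, 3.9, 4.5, 4.0],
    "2023-Q2": [4.4, 4.1, 4.6, 4.3],
    "2023-Q3": [3.8, 3.6, 4.0, 3.7],  # a drop that may be worth investigating
}

previous = None
for offering, scores in offerings.items():
    avg = mean(scores)
    direction = ""
    if previous is not None:
        direction = "(up)" if avg > previous else "(down)"
    print(f"{offering}: mean satisfaction {avg:.2f} {direction}")
    previous = avg
```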
B. Summative Evaluation
Summative evaluation is conducted and made public to examine the effects or outcomes of the activity and to facilitate decisions and judgments about the program’s worth or merit. Data are collected at the end of the activity, instruction, course, or program, or at a later time after the program or activity has concluded. The expectation is that there has been a change in performance, and it is crucial that the goals be clearly defined to indicate whether this change occurred. Can the participant apply classroom concepts in the clinical setting?
Includes:
Overall greater impact or effect of the program – Did the program or activity create broader or unintended effects beyond the specific target(s)? (Methods: economic effects, qualitative methods).
What unintended effects did the program have? There may be unforeseen consequences of a program. Some consequences may be positive and some may be negative. Be prepared to measure what else the program may be doing.
Justifying the cost/benefit ratio of the program – Is the program worthwhile compared to its cost? (A brief worked sketch follows this list.)
Conduct a secondary analysis of existing data – analyze previously-collected data by looking at it in a different way.
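The sketch below is a minimal, hypothetical illustration of the cost/benefit question above: total the program costs, estimate the dollar value of the benefit, and compare. Every figure (educator time, participant time, materials, estimated benefit, participant count) is an assumed placeholder used only to show the arithmetic.

```python
# Minimal sketch of a cost/benefit comparison for an educational program.
# All dollar figures and counts are hypothetical placeholders; substitute
# your own program data.
program_costs = {
    "educator_time": 3_000.00,     # planning and teaching hours x hourly rate
    "participant_time": 9_000.00,  # paid class time for attendees
    "materials": 500.00,
}
estimated_benefit = 25_000.00  # e.g., estimated cost avoided through fewer infections or falls
participants = 60

total_cost = sum(program_costs.values())
cost_per_participant = total_cost / participants
benefit_cost_ratio = estimated_benefit / total_cost

print(f"Total cost:           ${total_cost:,.2f}")
print(f"Cost per participant: ${cost_per_participant:,.2f}")
print(f"Benefit/cost ratio:   {benefit_cost_ratio:.2f}")
```

A benefit/cost ratio above 1.0 suggests the program returned more value than it cost; how the benefit is converted to dollars (for example, infections avoided or falls prevented) is the hardest part of the analysis and should be stated explicitly.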
Outcomes-Based Evaluation – focused on whether the program leads to the outcomes desired (participant benefits from the program); educational effectiveness of the program
Questions to ask yourself when designing an evaluation to determine if your program achieved the desired outcomes:
1. Did the program have any effects on the participants?
2. Did people learn the skills being taught?
3. Did participants stay in the program?
4. How well are the participants applying what they learned in day-to-day practice?
5. Did the program meet the needs of the participants and their supervisors?
General steps to accomplish an outcomes-based evaluation:
1. Identify the major outcomes to examine for the program under evaluation. Reflect on the overall purpose of the organization and ask what effect the program will have on healthcare providers or patients.
2. Prioritize the identified outcomes and pick the top two to four most important ones to examine.
3. For each outcome, specify what observable measures/indicators suggest that outcomes are being achieved. This is often the most important and challenging step. It helps to have a "devil's advocate" during this phase of identifying indicators.
4. Specify a “target” goal for participants. For example: What number or percentage of participants will achieve specific outcomes? (A brief worked sketch follows this list.)
5. Identify what data are needed to show that the outcomes have been met, e.g., how many nurses in the target group completed the program?
6. Determine how information can be efficiently and realistically gathered. Use a combination of methods if possible to triangulate or verify the data and to find in-depth information.
7. Analyze and report the findings.
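As a minimal sketch of steps 4 through 7, the Python example below compares the percentage of program completers who achieved a specified outcome against a stated target. The participant records, the documentation outcome, and the 90% target are all hypothetical.

```python
# Minimal sketch: compare observed outcome achievement against a target goal.
# The records and the 90% target are hypothetical placeholders.
records = [
    {"nurse": "A", "completed_program": True,  "documented_ambulation": True},
    {"nurse": "B", "completed_program": True,  "documented_ambulation": False},
    {"nurse": "C", "completed_program": True,  "documented_ambulation": True},
    {"nurse": "D", "completed_program": False, "documented_ambulation": False},
]
target_percent = 90.0  # e.g., "90% of completers will document ambulation correctly on chart audit"

completers = [r for r in records if r["completed_program"]]
achieved = [r for r in completers if r["documented_ambulation"]]
percent_achieved = 100.0 * len(achieved) / len(completers)

print(f"{len(achieved)} of {len(completers)} completers achieved the outcome "
      f"({percent_achieved:.1f}%); target was {target_percent:.1f}%")
print("Target met" if percent_achieved >= target_percent else "Target not met")
```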
Methods: requires follow-up on participants – observational/correlational methods for determining if desired effects occurred; experimental or quasi-experimental designs to determine if effects can be attributed to the intervention. Changes in attitude and increased sensitivity are the most difficult to evaluate. Attempt to identify and evaluate behavioral evidence of changes.
Goals-Based Evaluation – useful if there are concerns about a mismatch between what the program is expected to do and what it is able to do
Questions to ask yourself when designing an evaluation to see if you reached your goals:
1. How were the program goals and objectives established? Was the process effective?
2. What is the status of the program's progress toward achieving the goals?
3. Will the goals be achieved according to the timelines specified in the program implementation plan? If not, then why not?
4. Do personnel have adequate resources (money, equipment, facilities, training, etc.) to achieve the goals?
5. How should priorities be changed to put more focus on achieving the goals?
6. Should timelines be changed?
7. How should goals be changed? Should any goals be added or removed? Why?
8. How should goals be established in the future?
C. Norm referenced and criterion referenced evaluation (Glaser, 1963)
- norm referenced – compares one participant’s score with other participants’ scores and requires that a benchmark be set; generally used when comparing test scores or clinical performance to a group norm
- criterion referenced – compares the participant’s score against the specific goal or objective that was set; focus is on meeting the objectives (see the sketch below)
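The small sketch below contrasts the two approaches: the same participant score is judged once against a fixed criterion and once against the group norm (expressed as a z-score). The scores and the criterion of 85 are hypothetical.

```python
# Minimal sketch contrasting norm-referenced and criterion-referenced scoring.
# The group scores, participant score, and criterion are hypothetical.
from statistics import mean, stdev

group_scores = [72, 78, 81, 85, 88, 90, 93]  # comparison group's exam scores
participant_score = 85

# Criterion-referenced: compare against a fixed objective (e.g., "scores 85 or higher").
criterion = 85
meets_criterion = participant_score >= criterion

# Norm-referenced: compare against the group norm, here as a z-score.
z = (participant_score - mean(group_scores)) / stdev(group_scores)

print(f"Criterion-referenced: meets the criterion of {criterion}? {meets_criterion}")
print(f"Norm-referenced: z-score relative to the group = {z:.2f}")
```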
D. Competency based evaluation (Abruzzese, 1992)
- assesses the participant’s ability to demonstrate expected behaviors as outlined by performance standards
The overall goal in selecting evaluation method(s) is to get the most useful information to key decision makers in the most cost-effective and realistic fashion.
Evaluation Models
- Represent how variables, items, or events to be evaluated are arranged, observed, or manipulated to answer the evaluation question.
- Help to clarify the relationships among the variables being evaluated
- Provide a systematic plan or framework for the evaluation
- Keep the evaluation on target
- Should be selected based on the demands of the evaluation questions, the context of the evaluation, and the needs of the stakeholders
First generation models were developed prior to WWI. Their focus was primarily on evaluating the participant’s performance (norm referenced).
A. Performance-Objective Congruence Model (Ralph Tyler)
- The learner’s behavioral objectives are the desired performance outcomes measured before, during, and after a program
- Limitations: Does not consider structural, contextual, or procedural factors that might affect participants; only quantifiable information can be included in this model
- Components of the model:
- What objectives or expected behaviors will the learner achieve?
- What teaching methods, experiences, etc. are needed to achieve objectives?
- What methods will be used to measure the student’s achievement of objectives?
Second generation models were developed after WWI. Their focus was primarily on evaluation of the curriculum by assessing the participant’s performance based on the written objectives.
B. Four Step Evaluation Process (Kirkpatrick, 1959; 1994)
- One of the most popular and well-known models of evaluation
- Reaction – reaction/satisfaction of participants
- Learning – the extent to which participants change attitudes, improve knowledge, and/or increase skill
- Behavior – extent to which changes in behavior occur as a result of education
- Results – final results that occur as a result of program attendance (increased production, improved quality, decreased costs, reduced turnover, higher profits)
- Limitations – oversimplified view of training effectiveness that does not consider individual or contextual influences when evaluating education; assumes that positive reactions lead to greater learning and more positive organizational results
Revised Model (Chyung, 2008)
Third generation models were developed after 1967. Evaluation focus was primarily on program goals and performance, and attention was also given to the evaluation model used.
C. Goal-free (Scriven, 1967)
- Measures all outcomes/effects of program; no specified objectives
D. Quality Assurance (Donabedian, 1969)
- Three part model
- Structure – internal and external supports for the plan and curriculum
- Process – implementation
- Outcomes – quality & extent to which the program achieves its goals
E. CIPP Model (Stufflebeam, 1971)
Four step model that focuses on providing ongoing evaluation for the purpose of decision-making and to improve the program; data are delineated, collected, and reported for decision making within each step of the model
This model focuses on a program's merit, worth, significance, and lessons learned.
- Context – identify target audience and determine needs to be met (goals, objectives, learner)
- Input – determine available internal/external resources, possible alternative strategies, and how best to meet identified needs
- Process – examine how well the plan was implemented (activities, documentation, and events)
- Product – examine results obtained, whether needs or goals were met, and what future planning requires
Limitations: Seeks to define the measures of input and output; does not take into account intervening variables
F. Discrepancy Evaluation (Provus, 1971)
- Five-stage model that assesses individual programs for inconsistencies using pre-set criteria or standards
- Define the program
- Install program criteria/standards
- Process
- Determine product or outcomes
- Cost-benefit – analyze results for decision making regarding program effectiveness, continuation, or cancellation
G. Countenance Evaluation (Robert Stake, 1972; based on Tyler’s model)
- Type of qualitative/anthropological model whose name refers to the two faces of evaluation: description and judgment made within a specific context
- Emphasizes the value of subjective human interpretation of the observations made
- Data are collected and distributed among 3 areas:
- Antecedents (objectives) – characteristics of students, teachers, curriculum, facilities, materials, organization, community
- Transactions (processes) - educational experiences
- Outcomes (product) - abilities, achievements, and attitudes resulting from educational experience
- Once data is collected, two matrices are used for program evaluation
- The descriptive matrix compares intended and actual antecedents, transactions, and outcomes of the program
- The judgment matrix compares the descriptive matrix to appropriate benchmarks for the activities being evaluated (usually from a national accrediting body)
- The final evaluation is an interpretation by the evaluator of discrepancies between the two matrices, and assigning an overall rating of merit
Fourth generation models were developed in the late 1970s. The evaluation focus included stakeholder concerns and values.
H. Cervero Behavioral Change Model (1985)
- Four factor model that seeks to determine whether a change in performance will occur as a result of an educational program
- Individual professional – while many characteristics can influence learning, this model focuses on one key characteristic: receptiveness to new ideas.
- Social system – recognizes the influence of peers and the system on the ability of participants to practice nursing according to what is taught.
- Proposed change – more change in practice occurs if participants rate themselves high on the likelihood of implementing the change (feasibility).
- CPE (continuing professional education) program – variables in the design and implementation of the CPE program (quality) relate to program outcomes.
I. Roberta Straessle Abruzzese (RSA) Evaluation Model – (Abruzzese, 1992)
- Similar to Kirkpatrick’s Model, but adds a fifth step
- Process – happiness with the learning experience (did the participant like it?) [formative evaluation]
- Content – change in knowledge, affect, or skill on completion of learning (did the participant know it after education?) [formative evaluation]
- Outcome – changes in practice on a clinical unit after learning (did the participants do it on their unit?) [summative evaluation]
- Impact evaluation – organizational results attributable in part to learning (So what?) [summative evaluation]
- Total program evaluation – congruence of program goals and accomplishments
J. Pyramid Model (Hawkins & Sherwood, 1999)
- Based on the work of Donabedian and Rossi & Freeman
- Hierarchical, but evaluation can occur concurrently on different levels
- All levels are viewed as an integrative whole and building on previous work
- Five components from bottom to top
- Examination of the goals of the program
- Review of program design and reflection upon key concepts
- Monitoring of program implementation
- Assessment of outcomes and impact
- Efficiency analysis
K. Accreditation Evaluation Model