IN A NUTSHELL: ASSESSMENT AND EVALUATION

Both assessment and evaluation involve data gathering – sometimes using the same surveys or interview questions. The distinctions between them involve timing (assessment usually precedes implementation, evaluation follows it), purpose (assessment monitors trends over time, evaluation monitors the outcomes of a program), and use of results (assessment guides planning, evaluation guides continuous improvement of programming). Both should be sustainable over time and culturally appropriate.

Assessment: The process of identifying the needs of a target population and gaps in services to meet those needs. Needs assessments involve collecting secondary data (already available from other sources) or generating primary data (collected firsthand).

  • Types of assessment include:
      • Organizational needs and resources assessment examines aspects of the primary prevention organization, including human resources, technical resources, and infrastructure.
      • Community needs and resources assessment examines characteristics of the wider community, including prevention resources and infrastructure (e.g., call centers, trained counselors, programs, policies), partnerships within the community, substance use rates and consequences (e.g., alcohol-related mortality), factors that might cause, lead to, or promote substance use (e.g., risk and protective factors), and community readiness.
  • Needs assessments seek to answer the following questions:
      • What is the problem? How do we know there is a problem? What type of information shows there is a problem? From what data sources does this information come?
      • What can be done to address this problem effectively? What resources and infrastructure are needed to address this problem?
      • What resources and infrastructure do we already have to address the problem? What needed resources and infrastructure are lacking?
      • Are the identified resources and infrastructure designed for sustainability? Are the identified resources and infrastructure culturally appropriate?
  • IPRC Local Assessment Group
      • IPRC staff dedicated to grantee assessment activities include Eric Martin, Roger Cavazos, and Katharine Sadler.
      • Services include one-on-one onsite and telephone consultations, development of data collection tools, provision of county-level data, and review of epidemiological report and strategic plan drafts.

IMPLEMENTATION OF PROGRAM, STRATEGY, OR POLICY

Program Evaluation: Determining the value or worth of a program.

  • Types of evaluation include: Process and Outcome.
      • Process evaluation monitors the implementation of a program, strategy, or policy. This could include the lessons of a curriculum delivered to high school students, the number and location of billboards placed in a community, or the date and level of enforcement of a ban on methamphetamine precursor drugs.
      • Outcome evaluation determines achievement of objectives related to perceptions, attitudes, and behaviors. This could include perceived peer approval of drug use, perceived level of risk or harm associated with use, or use in the past 30 days.
      • Process evaluation is important for determining whether outcomes are related to components of the program, strategy, or policy. For example, if the locations of prevention billboards are not known, it is impossible to attribute decreases in social hosting within a particular neighborhood to this strategy. That is, without documenting the steps taken, it is difficult to know what produced the observed outcomes.
  • Evaluations seek to answer the following questions:
      • Did the program, strategy, or policy impact the problem? Did it impact related factors (e.g., risk and protective factors)? How do we know? What measures are we using to determine whether there was an impact?
      • How was change or lack of change measured (e.g., compared to a pre-test or to state rates)? Were data collected from individuals (e.g., surveys) or aggregate groups (e.g., focus groups)?
      • Were anticipated outcomes achieved? Why or why not? Was the process followed? Were identified resources and infrastructure obtained to address the problem? Was the program implemented with fidelity (e.g., all lessons of a curriculum delivered as the developer intended)?
      • Was the evaluation process culturally appropriate (e.g., was the cultural group included in interpretation)? Can the evaluation process be maintained over time? Why or why not?
  • Levels of Evaluation
      • National (or Cross-Site) Evaluation – Westat, Pacific Institute for Research and Evaluation (PIRE), and Mayatech are responsible for collection of Community Level Instrument (CLI) data and fidelity assessment.
      • State Evaluation – the Indiana University Purdue University-Indianapolis Center for Health Policy is responsible for collection of National Outcome Measures (NOMs) at the state level.
      • Program Evaluation – the IPRC evaluation team (Dr. Jeanie Alter, Dr. Randy Zafutto, and Marcia Dias) is responsible for facilitating completion of the CLI, collection of NOMs and process evaluation data, and evaluation plan development.


-Developed by the Indiana Prevention Resource Center-