From aimless action to professional development!

Introduction to evaluation of preventive work performed by practitioners and consultants

Prepared by the working group under the Alcohol & Drug Consultant A/N Forum

April 2000

List of contents:

Introduction to evaluation

Foreword

What is evaluation?

Process evaluation

Outcome evaluation

From aimless action to professional development

Potential barriers

Summary of the EMCDDA Guidelines

Project planning

Process evaluation

Outcome evaluation

Communication


Foreword

In April 1998, the EMCDDA published "Guidelines for the Evaluation of Drug Prevention". The aim of this publication was to promote the use of evaluation within the drug area. The EMCDDA guidelines provide useful suggestions on how to structure evaluations. However, they are not particularly reader- or user-friendly.

This introduction is an edited summary of the EMCDDA guidelines and an introduction to evaluation as such. The purpose is to accelerate a process that ensures that more practitioners have their actions evaluated. The guidelines are relevant not only to the drug area but to the entire area of prevention - in other words, to our daily practice, and hence our professional development: why are we doing the things we do?

What is evaluation?

Evaluation covers a variety of activities, and it is something we do on a daily basis without giving it much thought. Basically, evaluation means an appraisal of the value of one's actions. We do this semi-automatically throughout the day (Was this conversation OK? Was our meeting fruitful? Did we manage to do what we had set out to do?). In many cases, we use simple tools to systematise our own evaluation, e.g. checking our to-do lists or job lists.

When several people make a joint evaluation, it is easy to start discussing the criteria for the evaluation. (For instance, A: "This was a good meeting!" B: "Why do you think so?" A: "The atmosphere was good, and we succeeded in discussing all the items on the agenda." B: "It is my impression that we failed to make any explicit agreements on who does what after this meeting.") A and B evaluate the meeting from different perspectives: A is concerned with a worthwhile process, while B is more preoccupied with the product of the meeting. Subsequently, A and B might reach a joint perception of what characterises "a good meeting" and how to achieve it. In this case, they would have to identify essential elements of the process and significant characteristics of the product.

One last problem relates to the question of how to measure whether a criterion has been fulfilled. (For instance, C: "I don't find that the atmosphere was good; only 2 people took part in the debate, the other 5 were very subdued." A: "But it is my impression that they concurred with what was said and were satisfied with it!") A and C have different perceptions of the atmosphere at the meeting, so perhaps they will agree that future meetings should be concluded with a round on whether the participants found the meeting satisfactory (i.e. data collection and methods of measurement). Day-to-day evaluation thus employs, more or less explicitly, the elements that belong to any kind of professional evaluation.

Evaluation is defined as a systematic collection, analysis and interpretation of information about the progress and possible effects of actions. The interpretation or the evaluation of data is based on the purpose of the action and in accordance with predefined criteria.

Evaluation can be made on the basis of different professional approaches (e.g. organisational, administrative, financial or health-scientific evaluation). Irrespective of the professional approach, there are two fundamental types of evaluation:

- Process evaluation (also known as formative, progress or development evaluation)

- Outcome evaluation (also known as impact evaluation)

Ideally, a useful evaluation contains elements of both types.

Process evaluation

Process evaluation is a measurement of the quality of the action and a precondition for knowing which parts of the action furthered or hampered the intention. Process evaluation may also be necessary in order to know whether the action actually proceeds as intended. A natural element of process evaluation is therefore a measurement of the immediate reactions displayed by the target group.

Outcome evaluation

Outcome evaluation is a measurement of the impact of the action and is necessary in order to know whether the action produced any of the desired effects at all. Outcome evaluation is far more complex than process evaluation and would ideally require a major evaluation project applying a scientific approach.

From aimless action to professional development

Evaluation can be made at many levels of ambition, and it is necessary to be realistic: a moderate, limited evaluation (conducted in a simple but well-defined manner) is often better than no evaluation at all, and is always better than an ambitious, mismanaged evaluation. Research-level evaluation requires research expertise, but substantial knowledge can also be gained from less ambitious evaluations.

A systematic project description and a concise definition of goals are necessary prerequisites for carrying out useful evaluation work.

As noted in the introduction, evaluation is incorporated as a natural element in our daily routines (often oral, unsystematic and uncommunicated).

A systematic evaluation may lead to:

- Developing the professional work.

- Documenting goals and results, and thus enhancing the professional area.

- Debriefing others.

- Securing professional exchange and a joint course, perhaps even inspiration, when cooperating with others.

- Obtaining a tool for the benchmarking of methods.

- Maintaining experience within the organisation.

- Improving planning.

Potential barriers

It is remarkable that relatively few actions are evaluated. There may be barriers: evaluations are considered too expensive and difficult, and evaluation takes time that could have been spent on other actions. Finally, there may be concern that evaluation is futile, or that evaluation results will be misused by the media, politicians or the local management. These barriers should be broken down: it is better to spend time on a proper evaluation, which can be used constructively for learning and for adjusting the action, than to carry on an action that serves the wrong purpose. Evaluation, however, always requires something: time, deliberation and knowledge. Consequently, one should also take into account:

- Whether the resources required for evaluation are available.

- Whether the action in question is of a size rendering evaluation relevant.

- Whether it is possible to use the evaluation results for implementing changes.

In other words, whether in the given situation it is worth the effort.

Evaluation as a continuum:

Among drug prevention consultants there has been a growing recognition in recent years that evaluation forms the basis of developing our professional scope:

Aimless action <----------------------------> Actionless aim
(sheer practice)                              (sheer theory)

The consultant is placed somewhere along this continuum.

The aim of this introduction is to provide a tool to support the development that is already under way in many places.

Sources:

Kröger C, Winther H, Shaw R. "Guidelines for the Evaluation of Drug Prevention". EMCDDA, 1998.

Mehlbye J, Rieper O, Togeby M. Håndbog i evaluering. AKF, 1993.

Check list

EMCDDA's guidelines for the evaluation of drug prevention

1: Project planning:

Prior to each project there is a planning phase, which often also forms the basis of grant applications.

During the planning phase, the following points must, as a minimum, be analysed and described:

What is the problem?

How can the problem be explained?

Why is the action necessary?

Who is the target group?

What is the aim of the action?

What is the action?

Which resources are necessary?

How is the action evaluated?

Who needs to be involved in the planning and execution?

During the planning phase it should also be decided whether to use process and/or outcome evaluation, and who will perform it. It is important to make decisions on evaluation prior to the initiation of a project, since they may have a major impact on the process and the time schedule.

2: Process evaluation:

Process evaluation covers the progress of the project, the reactions of the target group, and whether the planned action has reached the identified target group. It judges the quality of the action and is performed during the action.

How should the process be evaluated (planning):

- Which indicators (measurable units) need to be applied to evaluate the process?

- Which methods need to be used (questionnaires, interviews, observations, etc.)?

- How and during which periods are data collected and by whom?

What was the actual course of the project (compared to the plan):

- How was the course of the actual action?

- How was the actual action measured?

- Which resources were actually applied?

Action target group:

- How many and who were subjected to the action?

- How many and who were reached?

- To what extent was the target group reached in relation to plan?

Success of action:

- How successful was the action (own evaluation)?

- What were the immediate reactions of the target group, and what was their attitude towards the action?

Data collection

- How and when were the data collected?

- What measurable data were actually applied?

All questions should be compared to the targets set out in the planning phase (point 1), and the strengths and weaknesses of the action subsequently evaluated.

Conclusions are then drawn for possible future actions.
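As a purely hypothetical illustration of comparing plan against outcome, the small calculation below sketches how one process indicator - target-group reach - might be set against the target from the planning phase. The figures and names are invented for the example and are not taken from the guidelines:

```python
# Hypothetical sketch: comparing a planned target with a measured result
# for one process indicator (target-group reach). All numbers are
# illustrative, not from any real project.

def reach_rate(reached, planned):
    """Share of the planned target group actually reached, as a percentage."""
    if planned <= 0:
        raise ValueError("planned target group must be positive")
    return 100.0 * reached / planned

planned_participants = 200   # target set in the planning phase
actual_participants = 150    # counted during the action

rate = reach_rate(actual_participants, planned_participants)
print(f"Target group reached: {rate:.0f}% of plan")  # prints "Target group reached: 75% of plan"
```

In the same spirit, each indicator chosen during planning would be recorded alongside its measured value, so that strengths and weaknesses can be judged against explicit figures rather than impressions.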

3: Outcome evaluation

Outcome evaluation is a judgement of whether the project achieved the planned impact.

How should the results of the action be measured (planning):

- Which indicators should be measured?

- Which methods should be used?

- How, when and by whom should data collection be made?

- Who should interpret data and how?

How was the action perceived in comparison with project goals:

- How did the target group profit from the action?

- Did the action have other (non-planned) effects on the target group?

- Has the action affected target groups other than those planned; if so, how?

Action goals:

- Were projected results reached?

- Has anything other than the planned action had an impact on the result?

- How are deviations from the planned activities explained?

- How are negative results, if any, explained?

Putting into perspective

- Suggestions for future actions

4: Communication:

When considering communication, the following questions should be answered:

- Who may benefit from the results (perhaps also outside the group of traditional cooperation partners)?

- When will they need the information?

- What information will be relevant?

- Which means of communication will be applied most appropriately?
