CARDS 2003 program: Decentralization and reorganization of the Croatian Employment Service (CES)

Proposal for a Performance Monitoring System

Evangelos Bountalis

March, 2007

TABLE OF CONTENTS

  1. INTRODUCTION
     1.1 Monitoring Using a Performance Measurement Framework
     1.2 Government Organisations and Monitoring Systems
     1.3 Performance Monitoring Plan
  2. MONITORING FRAMEWORK
  3. HOW TO CONDUCT MONITORING
  4. MONITORING PERFORMANCE
  5. USE MONITORING INFORMATION
  6. BIBLIOGRAPHY
  7. ANNEX I: GLOSSARY OF TERMS
  8. ANNEX II: RECOMMENDATIONS ON PERFORMANCE MONITORING IN PUBLIC SERVICES
  9. ANNEX III: DATA COLLECTION MATRIX
  10. ANNEX IV: THE ROLES OF P.M.

1. INTRODUCTION

1.1 Monitoring Using a Performance Measurement Framework

A striking feature of European Public Services in the 1990s was the rise of Performance Monitoring, which records, analyzes and publishes data in order to give the public a better idea of how government policies change the public services and to improve their effectiveness. Performance Monitoring was seen as an integral part of a Performance Management System, that is, a systematic approach to performance improvement through an ongoing process of:

Establishing strategic performance objectives

Measuring and monitoring performance

Collecting, analyzing, reviewing and reporting performance data

Using the data to drive performance improvement.

The following diagram presents the structural relationship between the performance measurement concepts:

Contemporary government organizations need to know whether limited resources are being well utilized. Data can be used to assess and improve an organisation’s performance, service delivery and reporting.

The current deliverable is related to activity 4 “Performance Management” and more precisely activity 4.6 “Support the production of performance monitoring reports”. It aims at proposing a Performance Monitoring System to be utilized by the Monitoring Team (Change Agents), within the wider context of a Performance Management System, in order to assess the achievement of CES organizational objectives and goals.

The following parts introduce the basic ideas and component concepts of a modern monitoring system suitable for the Croatian Employment Service.

1.2 Government Organisations and Monitoring Systems

Government organisations frequently report difficulty with tracking or monitoring systems. Government agencies need to recognise that not all community groups have access to ICT and should design systems and allocate funding accordingly.

Benefits of a good system
  • Good systems provide accurate and up-to-date feedback on what is working well or not so well.
  • Communication is improved in all parts of the organization.
  • A successful performance measurement system can become the basis of performance monitoring systems across the organisation.
  • The performance monitoring environment becomes more responsive and adaptable.
  • Good monitoring systems can trigger discussions on issues such as values and mission.

1.3 Performance Monitoring Plan[1]

A performance monitoring plan (PMP) is a tool used by organizations to plan and manage the collection of performance data. Sometimes the plan also includes plans for data analysis, reporting, and use.

At a minimum, PMPs should include:

  • a detailed definition of each performance indicator
  • the source, method, frequency and schedule of data collection
  • the office, team, or individual responsible for ensuring data are available on schedule

As part of the PMP process, the operating units should plan for:

  • how the performance data will be analyzed, and
  • how they will be reported, reviewed, and used to inform decisions

PMPs should be updated as needed to ensure plans, schedules, and assignments remain current.
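For illustration, the minimum PMP contents listed above can be thought of as one record per indicator. The following is a minimal sketch in Python; the field names and the example entry are hypothetical, not taken from the deliverable:

```python
from dataclasses import dataclass

@dataclass
class IndicatorPlan:
    """One row of a performance monitoring plan (PMP)."""
    indicator: str          # detailed definition, incl. unit of measurement
    data_source: str        # entity from which the data are obtained
    collection_method: str  # primary or secondary, and the technique used
    frequency: str          # e.g. "quarterly", "annual"
    responsible: str        # office, team, or individual assigned

# Hypothetical example entry
loans = IndicatorPlan(
    indicator=("Number of small enterprises (20 or fewer employees) "
               "receiving loans from private banks, in constant EUR"),
    data_source="Private banking association registry",
    collection_method="Secondary data, annual registry extract",
    frequency="annual",
    responsible="CES monitoring team",
)
print(loans.frequency)
```

A table of such records, one per indicator, is all a minimal PMP needs; plans for analysis, reporting and review can be attached to the same structure later.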

Benefits of PMPs

  1. A performance monitoring plan is a critical tool for:
  • planning,
  • managing, and
  • documenting data collection.

  2. It contributes to the effectiveness of the performance monitoring system by assuring that comparable data will be collected on a regular and timely basis.

These are essential to the operation of a credible and useful performance-based management approach.

  3. PMPs promote the collection of comparable data by sufficiently documenting indicator definitions, sources, and methods of data collection. This enables operating units to collect comparable data over time even when key personnel change.

  4. PMPs support timely collection of data by documenting the frequency and schedule of data collection as well as by assigning responsibilities.

  5. Operating units should also consider developing plans for data analysis, reporting, and review efforts as part of the PMP process.

It makes sense to think of data collection, analysis, reporting, and review as parts of an integrated process. This will help keep the performance monitoring system on track and ensure that performance data inform decision-making.

Elements of a PMP

The following elements should be considered for inclusion in a performance monitoring plan.

I. Plans for Data Collection

In its strategic plan, an operating unit will have identified a few preliminary performance indicators for each of its strategic objectives, strategic support objectives, and special objectives (referred to below simply as SOs), and intermediate results (IRs). In most cases, preliminary baselines and targets will also have been provided in the strategic plan. The PMP builds on this initial information, verifying or modifying the performance indicators, baselines and targets, and documenting decisions.

PMPs are required to include the information outlined below on each performance indicator that has been identified in the Strategic Plan for SOs and IRs.

Plans should also address how critical assumptions and results will be monitored, although the same standards and requirements for developing indicators and collecting data do not apply.

Furthermore, it is useful to include in the PMP lower-level indicators of inputs, outputs, and processes at the activity level, and how they will be monitored.

1. Performance Indicators and Their Definitions

Each performance indicator needs a detailed definition.

As an illustration, consider the indicator “number of small enterprises receiving loans from the private banking system”. How are small enterprises defined: all enterprises with 20 or fewer employees, or 50, or 100? What types of institutions are considered part of the private banking sector: credit unions, government-private sector joint-venture financial institutions?

Include the unit of measurement in the definition. For example, an indicator on the value of exports might be otherwise well defined, but it is also important to know whether the value will be measured in current or constant terms and in U.S. dollars or local currency.

The definition should be detailed enough to ensure that different people at different times, given the task of collecting data for a given indicator, would collect identical types of data.

2. Data Source

Identify the data source for each performance indicator.

The source is the entity from which the data are obtained, usually the organization that conducts the data collection effort. Data sources may include government departments, private firms, contractors, or activity implementing agencies.

Be as specific about the source as possible, so the same source can be used routinely. Switching data sources for the same indicator over time can lead to inconsistencies and misinterpretations and should be avoided. For example, switching from estimates of infant mortality rates based on national sample surveys to estimates based on hospital registration statistics can lead to false impressions of change.

3. Method of Data Collection

Specify the method or approach to data collection for each indicator. Note whether it is primary data collection or is based on existing secondary data.

For primary data collection, consider:

  • the unit of analysis (individuals, families, communities, clinics, wells)
  • data disaggregation needs (by gender, age, ethnic groups, location)
  • sampling techniques for selecting cases (random sampling, purposive sampling)
  • techniques or instruments for acquiring data on these selected cases (structured questionnaires, direct observation forms, scales to weigh infants)

For indicators based on secondary data, give the method of calculating the specific indicator data point and the sources of data.

Note issues of data quality and reliability. For example, using secondary data from existing sources cuts costs and effort, but its quality may not be as reliable.

Provide sufficient detail on the data collection or calculation method to enable it to be replicated.

4. Frequency and Schedule of Data Collection

Performance monitoring systems must gather comparable data periodically to measure progress. But depending on the performance indicator, it may make sense to collect data on a quarterly, annual, or less frequent basis.

For example, because of the expense and because changes are slow, fertility rate data from sample surveys may only be collected every few years, whereas data on contraceptive distributions and sales from clinics' record systems may be gathered quarterly.

When planning the frequency and scheduling of data collection, an important factor to consider is management's need for timely information for decision making.

5. Responsibilities for Acquiring Data

For each performance indicator, responsibility within the operating unit for the timely acquisition of data from its source should be clearly assigned to a particular office, team, or individual.

II. Plans for Data Analysis, Reporting, Review and Use

An effective performance monitoring system needs to plan not only for the collection of data, but also for data analysis, reporting, review, and use. It may not be possible to include everything in one document at one time, but units should take the time early on for careful planning of all these aspects in an integrated fashion.

6. Data Analysis Plans

To the extent possible, plan in advance how performance data for individual indicators or groups of related indicators will be analyzed.

Identify data analysis techniques and data presentation formats to be used.

Consider if and how the following aspects of data analysis will be undertaken:

Comparing disaggregated data. For indicators with disaggregated data, plan how they will be compared, displayed, and analyzed.

Comparing current performance against multiple criteria. For each indicator, plan how actual performance data will be compared with:

  • past performance,
  • planned or targeted performance or
  • other relevant benchmarks.
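The three-way comparison above can be sketched as a small helper function. This is an illustrative sketch only; the function name and the indicator values are hypothetical:

```python
def compare_performance(actual, past, target, benchmark):
    """Compare a current indicator value against past performance,
    the planned target, and an external benchmark."""
    return {
        "change_vs_past": actual - past,   # positive => improvement over past
        "target_gap": actual - target,     # negative => below target
        "vs_benchmark": actual - benchmark,
    }

# Hypothetical indicator: job placements per counsellor per quarter
result = compare_performance(actual=48, past=40, target=50, benchmark=45)
print(result)
# -> {'change_vs_past': 8, 'target_gap': -2, 'vs_benchmark': 3}
```

Reporting all three differences side by side avoids the common trap of declaring success on past-performance alone while the target is still being missed.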

Analyzing relationships among performance indicators. Plan how internal analyses of the performance data will examine interrelationships. For example:

  • How will a set of indicators (if there is more than one) for a particular SO or IR be analyzed to reveal progress?
  • What if only some of the indicators reveal progress?
  • How will cause-effect relationships among SOs and IRs within a results framework be analyzed?
  • How will the organization's activities be linked to achieving IRs and SOs?

Analyzing cost-effectiveness. When practical and feasible, plan for using performance data to compare systematically alternative program approaches in terms of costs as well as results.

7. Plans for Complementary Evaluations

Evaluations should be conducted only if there is a clear management need. It may not always be possible or desirable to predict years in advance when or why they will be needed.

Nevertheless, operating units may find it useful to plan on a regular basis what evaluation efforts are needed to complement information from the performance monitoring system. The operating unit's internal performance reviews, held periodically during the year, may be a good time for such evaluation planning.

For example, if the reviews reveal that certain performance targets are not being met, and the reasons why are unclear, then planning evaluations to investigate why would be in order.

8. Plans for Communicating and Using Performance Information

Planning how performance information will be reported, reviewed, and used is critical for effective managing for results.

For example, plan, schedule, and assign responsibilities for internal and external reviews, briefings, and reports. Clarify what, how and when management decisions will consider performance information.

Specifically, plan for the following:

Operating unit performance reviews. Operating units should conduct internal reviews of performance information at regular intervals during the year to assess progress toward achieving SOs and IRs. In addition, activity-level reviews should be planned regularly by SO teams to assess whether activities' inputs, outputs, and processes are supporting achievement of IRs and SOs.

External reviews, reports, and briefings. Plan for reporting and disseminating performance information to key external audiences, such as other partners, donors, customer groups, and stakeholders. Communication techniques may include reports, oral briefings, videotapes, memos, and newspaper articles.

Influencing management decisions. The ultimate aim of performance monitoring systems is to promote performance-based decision-making. To the extent possible, plan in advance what management decision-making processes should be influenced by performance information. For example, budget discussions, programming decisions, evaluation designs/scopes of work, office retreats, management contracts, and personnel appraisals often benefit from the consideration of performance information.

9. Budget

Estimate roughly the costs to the operating unit of collecting, analyzing, and reporting performance data for a specific indicator (or set of related indicators). Identify the source of funds.

If adequate data are already available from secondary sources, costs may be minimal. If primary data must be collected at the operating unit's expense, costs can vary depending on the scope, method, and frequency of data collection. Sample surveys may cost more than €100,000, whereas rapid appraisal methods can be conducted for much less.

However, these low-cost methods often do not provide quantitative data that are sufficiently reliable or representative.

2. MONITORING FRAMEWORK

2.1 Purposes of Monitoring

This part of the deliverable highlights the main purposes of monitoring, explains how this function is of use to the organization, and provides definitions of monitoring.

Monitoring enhances the effectiveness of an organization by establishing clear links between past, present and future interventions and results. Monitoring can help an organization to extract, from past and ongoing activities, relevant information that can subsequently be used as the basis for programmatic fine-tuning, reorientation and planning. Without monitoring it would be impossible to judge whether work was going in the right direction, whether progress and success could be claimed, and how future efforts might be improved.

Monitoring helps improve performance and achieve results. More precisely, the overall purpose of monitoring is the measurement and assessment of performance in order to manage more effectively the outcomes and outputs known as development results.

Performance is defined as progress towards and achievement of results.

Traditionally, monitoring focused on assessing inputs and implementation processes.

Today, the focus is on assessing the contributions of various factors to a given development outcome, with such factors including outputs, partnerships, policy advice and dialogue, advocacy and brokering/coordination. The main objectives of today's results-oriented monitoring are:

  • To enhance organizational and development learning.
  • To ensure informed decision-making.
  • To support substantive accountability.
  • To build organizational capacity in each of these areas, and in the monitoring function in general.

These objectives are linked together in a continuous process.

Learning from the past contributes to more informed decision-making. Better decisions lead to greater accountability to stakeholders. Better decisions also improve performance, allowing organizational activities to be repositioned continually.

Partnering closely with key stakeholders throughout this process also promotes shared knowledge creation and learning, helps transfer skills, and develops the capacity of the organization for planning, monitoring and evaluation. These stakeholders also provide valuable feedback that can be used to improve performance and learning. In this way, good practices at the heart of monitoring are continually reinforced, making a positive contribution to the overall effectiveness of development.

2.2 Definitions of Monitoring

Monitoring can be defined as a continuing function that aims primarily to provide the management and main stakeholders of an ongoing intervention with early indications of progress, or lack thereof, in the achievement of results. An ongoing intervention might be a project, programme or other kind of support to an outcome.

Reporting is an integral part of monitoring. Reporting is the systematic and timely provision of essential information at periodic intervals.

Monitoring takes place at two distinct but closely connected levels:

One level focuses on the outputs, which are the specific products and services that emerge from processing inputs through programme, project and other activities, such as ad hoc soft assistance delivered outside of projects and programmes.

The other level focuses on the outcomes of organizational development efforts, which are the changes in development conditions that the organization aims to achieve through its projects and programmes. Outcomes incorporate the production of outputs and the contributions of partners.

The following figure illustrates how outputs and outcomes inter-relate during the process of achieving results:

Two other terms frequently used in monitoring are defined below:

Feedback is a process within the framework of monitoring by which information and knowledge are disseminated and used to assess overall progress towards results or confirm the achievement of results. Feedback may consist of findings, conclusions, recommendations and lessons from experience. It can be used to improve performance and as a basis for decision-making and the promotion of learning in an organization.

A lesson learned is an instructive example based on experience that is applicable to a general situation rather than to a specific circumstance. It is learning from experience. Lessons learned can reveal “good practices” that suggest how and why different strategies work in different situations: valuable information that needs to be documented.

3. HOW TO CONDUCT MONITORING

This part of the deliverable describes how to develop a comprehensive, logical planning framework for monitoring related to organizational (CES) programmes, the strategic results framework and other activities. It provides guidance on how to develop a monitoring plan. The objective is to help CES plan for monitoring actions in a coherent manner, depending on its needs and the intended results.

a. Key Principles for Planning

i. Overall Work Planning

A work plan is an annual or multi-year summary of tasks, timeframes and responsibilities.

It is used as a monitoring tool to ensure the production of outputs and progress towards outcomes.

Work plans describe the activities to be conducted as well as the expected outputs and outcomes.

The overall process of work planning is a comprehensive tool that helps people translate information or ideas into operational terms on an annual basis.

A monitoring work plan contains three inter-related elements:

  • The overall work plan, which contains substantive information and management actions and is overseen by country office management.
  • The monitoring and evaluation work plan, which is focused on outputs and outcomes and overseen by programme staff.
  • The project work plan, which is focused on activities and outputs and overseen by project staff.
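The three elements and their owners can be represented, purely for illustration, as a nested structure; the key names are assumed, not prescribed by the deliverable:

```python
# Sketch of the three inter-related work plan elements; the "focus" and
# "owner" values are taken from the list above.
monitoring_work_plan = {
    "overall": {
        "focus": ["substantive information", "management actions"],
        "owner": "country office management",
    },
    "monitoring_and_evaluation": {
        "focus": ["outputs", "outcomes"],
        "owner": "programme staff",
    },
    "project": {
        "focus": ["activities", "outputs"],
        "owner": "project staff",
    },
}

# Each element has a distinct owner, so accountability is unambiguous.
for name, element in monitoring_work_plan.items():
    print(name, "->", element["owner"])
```

Keeping the three plans in one structure makes it easy to check that every level of result (activity, output, outcome) has an assigned owner.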

ii. Minimum Requirements