Results and Monitoring Framework

Building Stronger Universities II

Draft, 09 May 2014 – Nils Boesen, BSU II Monitoring Consultant

1. Introduction and purpose

The purpose of this paper is to facilitate a consistent approach to the specification of results and indicators across the BSUII partnerships. This specification will form the basis for the subsequent monitoring and reporting as the partnerships unfold.

The paper builds on literature about performance management and measurement of research-based tertiary education institutions. For such institutions worldwide there has over the last two to three decades been an increasing focus on measuring performance at multiple levels and using multiple parameters. This includes international or regional rankings; use of key performance indicators as management and budgeting tools; and making data about performance available to the public. To the degree that Southern universities are also applying such methods, this paper – and, more importantly, the monitoring efforts required under the BSUII programme – aims to align with the efforts of the universities and to strengthen their internal performance management if and when relevant and feasible.

The paper also builds on available Danida guidance on capacity development, results-based management and monitoring, as well as on international literature on research capacity development and monitoring of research oriented international cooperation.

The paper has the following sections:

  • Section 2 presents key terms and concepts.
  • Section 3 presents the generic results framework for research capacity development that the monitoring will adhere to.
  • Section 4 specifies the broad categories within which the partnerships should identify results and indicators, and how these will be aggregated at programme level.
  • Section 5 specifies the monitoring process.

2. Key Terms and Concepts

Results and results chains

In accordance with objective-oriented planning methods, in particular the Logical Framework Approach (LFA), the partnerships will identify results at different levels, in (logical) results-chains that represent the expected causal connection between means and ends. For example, training can be expected to lead to the trainees having new skills. The trainees may apply these skills in the workplace, leading to new or enhanced products/services (or, in one word, performance) of the unit. This again may have direct or indirect effects for the users of these services/products – leading to wider impact on e.g. health, growth, poverty or other relevant key societal parameters.

The desired wider impact of an intervention is often called the development objective, while the medium-term outcomes are called immediate objectives. Sometimes the terms of purpose, aim, goal, and vision are also used – all of these terms describe a future desired situation to which the interventions are expected to contribute.

As a simple example, health staff may be trained in a new way to approach citizens whose smoking habits are a risk to themselves and a cost to society. Applying the skills in their counselling may lead more smokers to quit the habit, with an ultimate effect on health parameters (and health system costs).

At the planning stage of an intervention, a results-chain is a hypothesis only – an expectation that by conducting certain activities, there will be a ripple-effect ultimately resulting in the desired impact. The hypothesis may be wrong or right (or both partially wrong and right…) – and monitoring is the instrument to verify if the hypothesis was indeed reasonable.

Box 1: Key terms - Results, Impact, Outcome, Output, Activities

The term “result” is used to describe either the output, outcome or impact (intended or unintended, positive and/or negative) of a development intervention or, in the case of BSUII, of the partnership.
“Impact” is used to describe the positive or negative, primary and secondary long-term effects produced by the partnership, directly or indirectly, intended or unintended.
“Outcome” is the likely or achieved short-term and medium-term effects of the outputs of the partnership.
The “outputs” are the immediate and short-term effects of the activities of the partnership. Do note, however, that outputs can be at different levels, and they are in most cases not the same as “an activity concluded”. Running a training course is an activity – but the output is not that the training course has been implemented (or that 20 persons participated). That is not an effect of running the training course, but simply a statement that something has been done. The immediate effect would typically be that participants have acquired new skills or competencies (often measured through tests or exams), while an effect a little further down the road would be that these competencies are actually used in the workplace. This latter output may well demand that other activities be conducted (e.g. new guidelines, a new management approach or other).
Finally, the “activities” are the actions taken or work performed through which inputs, such as funds, knowledge exchange, joint collaboration and other types of resources, are mobilised to produce specific outputs[1].
The project outlines prepared by the Southern universities already use this type of results-chain thinking. The inception phase will further specify the results-chains.

The purpose(s) of monitoring

In the context of monitoring Danida-funded programmes and projects, the primary aim is to make relevant information for decision-making and learning available. This takes place at three levels: day-to-day management, for immediate feedback into decision-making on practical implementation; the Steering Committee or similar overall management body, for regular stock-taking and strategic decision-making; and the higher echelons of management, both in the university and in the Danish Ministry of Foreign Affairs, for accountability and control purposes.

Box 2: Key terms - Monitoring, Indicators, Targets, Baseline

Monitoring: A continuing function that uses systematic collection of data on specified indicators to provide management and the main stakeholders of the partnerships with indications of the extent of progress and achievement of objectives and progress in the use of allocated funds.
Indicator: Quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievement, to reflect the changes associated with the intervention, or to help assess the performance of the partnership.
Target: A target signifies the value that an indicator is supposed to attain at a given point in time.
Baseline: A baseline value of an indicator is the value at the start of the intervention. Without a baseline and a target it is difficult to assess progress.
Importantly, indicators can be at any result-level: They can indicate impact, outcome or outputs.
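To make the interplay of indicator, baseline and target concrete, the following is a minimal illustrative sketch (the names and values are invented for illustration, not a prescribed BSUII format) of how a monitored indicator might be recorded and its progress checked:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One monitored indicator with its baseline and target values."""
    name: str
    baseline: float  # value at the start of the intervention
    target: float    # value the indicator should attain at a given point in time
    current: float   # most recently reported value

    def progress(self) -> float:
        """Fraction of the baseline-to-target distance covered so far.

        0.0 means no movement from the baseline; 1.0 means the target
        is reached; values above 1.0 mean the target has been exceeded.
        """
        if self.target == self.baseline:
            raise ValueError("target must differ from baseline")
        return (self.current - self.baseline) / (self.target - self.baseline)

# Hypothetical example: PhD completions per year, baseline 4, target 10.
phd = Indicator("PhD completions per year", baseline=4, target=10, current=7)
assert phd.progress() == 0.5  # halfway from baseline to target
```

Without both the baseline and the target recorded, the `progress()` computation above is undefined – which is exactly the point made in Box 2: without a baseline and a target it is difficult to assess progress.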

3. Results Framework for Research Capacity Development

This section details an analytical model – or a results framework – for Research Capacity Development[2]. The detailed substantive elements may well be replaced with others that partners find more relevant, but the underlying logical results-chain about capacity development should be kept as the point of reference.

The figure on the next page illustrates the logical chain. Starting from the left the argument (theory of change) goes that:

  1. Inputs (knowledge, time, funds, other resources) from Southern and Danish partners with good cooperative relations – will be used to conduct:
  2. Capacity development activities (exchange or training workshops, joint preparation of research proposals, development of new PhD courses, etc.). These are all processes which will lead to outputs in the form of:
  3. Enhanced research capacity (primarily, but not only, in the South) in the form of e.g. better research-based curricula; faculty trained in new approaches to teaching, supervision and research; research quality assurance policies in place; extended networks; as well as policies governing how to reach out to external stakeholders. This would be underpinned by reasonably appropriate incentives to staff; enhanced financial management; effective management and governance; better laboratory and library facilities etc. As a (most likely mid-term) result or outcome, this will lead to:

  4. Enhanced research performance of the university. This would include actually delivering better (relevant, efficient, effective) services to the customers of the university: to students (better, research-based teaching, better career coaching etc.); to other scholars (quality research reaching research publications, conferences etc.); and to society (targeted and purposeful interaction with communities, economic agents and governments, helping to solve specific problems or bringing evidence to policy making, etc.). This would lead to outcomes where
  5. Users benefit from the strengthened research performance of the university; that is, students get relevant jobs and are able to perform drawing on their scientific training; research from the university echoes well in relevant academic environments; and research results are taken up by stakeholders, finally leading to
  6. Societal impact, likely to materialize more substantially only over decades: better health, more productive agriculture, a more competitive private sector, better environmental management, more effective government policies. This whole hypothesis of means and ends from 1 to 6 further depends on:
  7. Context factors and relations. These can be influenced (e.g. fighting for budgets from central government) but are also very influential on their own (e.g. an economic downturn will hurt employment prospects of candidates, and funding) – they represent risks and opportunities that both demand monitoring and proactive action.

For the purpose of the monitoring of the BSUII partnerships, focus will initially be on levels 1-4, and on particular risks in the context of BSUII (level 7). Specific results and indicators with baselines and targets will have to be developed for level 3 (capacity) and level 4 (performance). Monitoring of activities and inputs will also be required (comparing actual activities to planned ones, and expenditures to budgets), as will monitoring of the evolving quality of the partnership relations.

The description above embeds three areas of effects of research capacity that often appear in the literature[3]:

  1. The effects of having research-based teaching. Outcomes include e.g. having more and better qualified graduates relevant to the needs of the country/region; with a wider impact on growth, employment and productivity.
  2. The effects of research and scholarship. Research producing new knowledge is often considered an aim in itself. It will result in publications; citations; attraction of research funding; awards and recognition. At a higher impact level, it will contribute to the collective ongoing knowledge accumulation and scientific progress of humankind.
  3. The effects of public service research and outreach and the subsequent uptake of research. The interactions with social groups, the private sector and government around research priorities, as well as in advisory functions and action-research addressing specific problems, will result in uptake of the research at community level, by private sector actors, by non-governmental organisations involved in advocacy, or at policy level. This level of “applied research” is an outcome area of particular interest to Danida (and other donors) because the shorter-term benefits to local development processes may well be most likely to materialize in this area.

The individual partnerships may place different weight on achieving effects in each of these three areas, and they may aim at results further or less far down the results-chain. All partnerships are, however, invited to think through their relative focus on each of these areas, as well as their specific theory of change that will eventually produce the desired outcomes and impact, not for the university only, but for the world around it.

Annex 1 includes a table that seeks to specify possible results and indicators for the three focus areas mentioned above as well as for the most pertinent results-levels.

4. Partnership and Programme level capacity and performance results and indicators

Partnership level results, indicators and baselines

Building on the framework outlined above, the partnerships should develop results and corresponding indicators, primarily for research capacity and university performance. In doing so, the partnerships should build on the results frameworks and key performance indicators already used by the Southern university. Table 1 below indicates the fields of capacity and performance within which results and indicators should be identified, with more details available in Annex 1. Individual partnerships are of course invited to add to what is outlined in this paper, and they are not expected to identify specific results and indicators in all the areas listed.

Do note that the table does not specify when and how to add e.g. gender specificity to the indicator areas; this will be a crucial task during the inception phase. Nor does the table specify indicators for the risks and opportunities (level 7 in the model outlined in section 3).

Table 1: Results- and indicator clusters for BSUII (level 3 and 4 only)

Research capacity areas (level 3)

“Backoffice” and support capacities:
  • Laboratory performance: quality standards; management/maintenance; usage; user satisfaction
  • Library performance: usage; user satisfaction; expenditure/student
  • Financial and financial management performance
  • Other research administration
  • Workplace satisfaction

Research based teaching:
  • New/revised courses, modules or programmes in use
  • Number of students exposed to research-based education
  • Ratio of research-qualified teachers to students

Research:
  • Submissions of publications
  • Submission of research proposals
  • Participation in research networks
  • Research staff number and experience

Outreach:
  • General dissemination and communication activities about research
  • Regular consultations with stakeholders
  • Action research activities
  • Community outreach

Research performance areas (level 4)

Research based teaching:
  • Student satisfaction
  • Graduation numbers, times and cost per student
  • Research relevant learning outcomes
  • Employment rate in jobs/business where research competencies are relevant

Research:
  • Publications, citations, presentations at conferences
  • Successful proposals as proportion of all submissions
  • Awards/recognition
  • Appropriate retention and renewal of research staff

Outreach:
  • Stakeholder satisfaction
  • Perceived relevance and quality of research
  • Selection for advisory boards, committees etc.
  • Specific contributions to practical solution of development problems

As noted, baseline values will have to be identified for each indicator. The baseline value should in general not be zero, even if the indicator mentions e.g. new or revised courses. The new and revised courses should be measured against the total existing number of courses in the particular area, to enable a clear sense of progress compared to the actual situation. Adding or revising one of, say, 50 courses is obviously a different achievement than adding one new course to a stock of five courses.
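The stock-relative measurement described above can be sketched in a few lines (the function name and the course figures are illustrative only, not part of any BSUII template):

```python
def share_of_stock(new_or_revised: int, existing_stock: int) -> float:
    """New or revised courses as a share of the existing course stock.

    Measuring against the stock keeps the achievement in proportion to
    the actual situation at the university, rather than counting from zero.
    """
    if existing_stock <= 0:
        raise ValueError("existing stock must be positive")
    return new_or_revised / existing_stock

# One revised course means very different progress for different stocks:
large_catalogue = share_of_stock(1, 50)  # 0.02 - marginal renewal
small_catalogue = share_of_stock(1, 5)   # 0.2  - substantial renewal
```

The same achievement in absolute numbers thus translates into a tenfold difference in relative progress, which is why a non-zero baseline matters.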

Programme level results and indicators

Across the partnerships, a limited number of indicators within the indicated areas will be identified that can meaningfully be aggregated as results and indicators of the BSUII programme. The intention is to identify indicators that all or most partnerships can report on with no, or only minor, modifications of the indicators identified by each partnership.

The programme level indicators will be developed in tandem with the partnership indicators. They will reflect the tentative results and indicators outlined in the BSUII programme document. To the degree that it proves difficult to synthesize across specific indicators, programme-level reporting may have to rely on more qualitative forms of aggregation.
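As an illustrative sketch of the aggregation idea (partnership names and indicator values below are invented), a programme-level figure could be compiled wherever the partnerships report on a comparable indicator:

```python
# Hypothetical example: each partnership reports the same indicator
# (say, peer-reviewed publications this year); the programme level
# aggregates across partnerships and tracks how many could report.
reported = {
    "Partnership A": 12,
    "Partnership B": 7,
    "Partnership C": 9,
}
total_partnerships = 3  # partnerships expected to report on this indicator

programme_total = sum(reported.values())
reporting_rate = len(reported) / total_partnerships
```

Aggregation of this kind is only meaningful when the underlying partnership indicators share the same definition and unit, which is why the programme-level indicators are developed in tandem with the partnership-level ones.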

5. The Monitoring process

The monitoring process in the BSUII partnerships consists of:

  • A monitoring workshop (this paper is a key input to the workshop)
  • Identification of specific results and indicators for phase II
  • Establishment of baseline values and targets of the selected indicators
  • Approval of the inception report where the results, indicators, targets and baseline values will appear.

The monitoring consultant employed by Danida will assist as required during this part of the process, which will take place during the 3-month inception phase.

In the implementation phase, the following tasks pertain to the partnership lead (South University) and as relevant the Danish partners:

  • Regular monitoring by the partnership management, with resulting decisions on adjustment of workplans and timetables.
  • Reporting on the development of indicator values as compared to the targets and baselines as per the schedule stipulated in the Trilateral Agreements which will be signed at the end of the inception phase.
  • Development of proposal for adjustment of results, indicators and target values as per the lessons learned through the monitoring. Such adjustments will be subject to a dialogue with DFC and may have implications for budget allocations and schedules.

Mid-term in the implementation phase, Danida will conduct a review of the progress of the partnerships. This review will take stock of advances, constraints and risks experienced during the implementation. The review will form the basis for the dialogue about a possible third phase of the BSU programme.

A final remark on monitoring, results and indicators

This paper aims at balancing performance ambitions with learning and accountability purposes. Often, identification of results, indicators and targets can be a dull, nearly mechanical exercise, which ends up as risk avoidance behavior: identifying what can with near certainty be delivered – say, “20 people trained in scientific writing” (which is really not a result at all, but a statement about a completed activity) – means that success is nearly guaranteed at the outset. But that is the success of mediocrity and low ambition. The results and indicator discussion, frankly conducted, is a unique chance to identify a level of real ambition – with the inherent real risk of failing to reach the targets.