INTERNALISED EVALUATION: The Malaysian Experience
Background
The Modified Budgeting System (MBS) has been implemented in Malaysia since 1990, explicitly to overcome various weaknesses of the Program Performance Budgeting System (PPBS): it sought to optimise resource allocation and improve program performance while increasing the level of accountability. It was based on the philosophy of Results Based Management (RBM), using an Integrated Performance Framework to drive results. It was implemented on the principle of Let Managers Manage, using a top-down planning methodology in which managers were empowered to generate outputs as cost-effectively as possible, thereby contributing to program effectiveness.
2. The MBS's strong focus on results is clear from the approach and strategic components of its planning process. The MBS requires considerable strategic planning and needs assessment before goals and objectives are set for each program and activity. Program managers are required to carry out detailed analysis and to use program logic in determining client needs, program goals, objectives, outputs, and the specific activities to be carried out towards achieving these objectives and the pre-determined impact. The MBS therefore focuses attention on managing for results at almost all stages of the budgeting process, making it a cohesive managing-for-results budgeting system.
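To make the idea of a program logic chain concrete, the short sketch below records one hypothetical program's logic as structured data, from client needs through to intended impact. This is purely illustrative: the field names and the rural health example are assumptions of this note, not part of the MBS documentation.

```python
# A minimal sketch (Python 3.9+) of a program logic chain of the kind
# program managers articulate under MBS. All values are hypothetical.
from dataclasses import dataclass

@dataclass
class ProgramLogic:
    client_needs: list[str]     # needs the program is meant to address
    goal: str                   # overall program goal
    objectives: list[str]       # measurable objectives
    activities: list[str]       # specific activities to be carried out
    outputs: list[str]          # outputs the activities generate
    intended_impact: str        # pre-determined impact the program targets

rural_health = ProgramLogic(
    client_needs=["Access to basic health services in remote districts"],
    goal="Improve health outcomes in under-served rural communities",
    objectives=["Raise child immunisation coverage to 95% within three years"],
    activities=["Mobile clinic visits", "Training of community health workers"],
    outputs=["Clinic sessions delivered", "Health workers trained"],
    intended_impact="Reduced incidence of preventable childhood illness",
)
print(rural_health.goal)
```

Recording the chain explicitly in this way makes it straightforward to check, at evaluation time, whether each output and objective can be traced back to a stated client need.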
3. In 1999, the government decided as a matter of policy that program evaluation conducted under the Modified Budgeting System (MBS) would adopt the Internalised Evaluation (IE) Model. The IE model was adopted because evaluation has evolved into more than just an external event: the emphasis is now on capacity building that contributes to, and leverages, knowledge to strengthen governance and the organisation. This is a shift from the original concept of viewing evaluation within the context of rates of return on investment and economic and financial viability. Under the new paradigm, evaluation is viewed as an ongoing learning process that contributes to capacity building. The IE model has been adopted in various forms by a number of countries, such as Japan, South Korea, France, the United Kingdom and the United States. Evaluation generally tends to focus a great deal on methodology, with the result that managers shy away from being involved in evaluation and see it as something that can only be carried out by a group of highly trained consultants. While we concur that methodology is important in evaluation, the goal is to make it as simple as possible for managers at all levels so that they can use it effectively to make decisions.
Why the Internalised Evaluation Model?
Internalised Evaluation (IE)[1] is a form of action research that supports organisational development and intentional change. Evaluation is seen as an integral part of the program implementation cycle. The focus under IE is as follows:
- Stakeholders and managers have ownership over the evaluation process.
- The goal is not to prove but to improve.
- Evaluation is ingrained in the day-to-day operations of an organisation.
- Communication and understanding between groups in a program improve.
- New knowledge about effective practices is discovered, contributing to innovation.
- Conducting evaluation internally is cost-effective.
- An important benefit of evaluation is organisational learning: a way for the organisation to assess its progress and to change in ways that lead to greater effectiveness.
Within the organisational context, IE is a tool for managers that guides organisational management and decision-making and contributes to organisational design and organisational learning. It is therefore critical to integrate monitoring and evaluation into organisational life. Moreover, under MBS, which is driven by a common integrated framework, monitoring and evaluation are designed as an integral part of that framework. The success of IE depends on the following key factors:
- Top management support of, and commitment to, evaluation
- Positive and capable leadership that understands and internalises the evaluation function
- An organisational culture that supports continuous learning and critical program review for improved decision-making
- A highly visible public image of internal evaluation in the organisation to drive governance
The Malaysian IE Model
The Malaysian IE Model requires that individual program/activity managers undertake evaluation, which in turn requires them to attain the requisite skills to do so satisfactorily. The demand for program managers to undertake evaluation internally stems from the fact that the program manager and his team (comprising officers from the program) have the necessary institutional memory and program knowledge to make informed recommendations in relation to program results. It is therefore imperative that the methodology be simplified while maintaining a high level of integrity. The program and activity structure under which MBS operates is designed for lateral and vertical linkages and integration to achieve results. Building on this, IE uses peer group reviews[2] of the evaluation process to prevent biased reporting by program managers. Peer group reviews have added value to the evaluation process, as they clearly define lines of accountability and the level of responsibility for results borne by each program manager.
Evaluation in the MBS environment looks at performance or results at various levels of program implementation; under IE, program managers are therefore in a better position to understand the nature of the problems and how best to overcome them. This has resulted in greater innovation, better-targeted program delivery and a more positive and committed attitude towards recommendations for program improvement. IE under MBS can be built in to enable the development of a framework for monitoring, formative (ongoing) evaluation and summative (discrete) evaluation, since these are interlinked through an Integrated Performance Framework (the Program Agreement).
IE in Malaysia is a step-by-step approach for those involved in evaluation at the program level. A trained moderator initiates it in a workshop environment. It introduces a new approach to evaluation by engaging stakeholders and operatives in serious discussion about results and in using this information for program planning and implementation. Program officers with basic exposure to, or training in, evaluation theory can undertake the evaluation.
Details of the implementation stages can be seen in the following chart.
Implementation of IE
The implementation of evaluation is guided through a three-stage moderated workshop (see Chart 1), as follows:
- Part One: The Planning and Design Stage
- Part Two: Instrument Design, Testing and Validation
- Part Three: Data Analysis and Report Writing
Part One requires that the evaluation team prepare an Action Plan for submission to the Program Evaluation Steering Committee. A series of simulations is carried out to discuss the program at length so that all participants have a satisfactory knowledge and understanding of the program logic and other attributes of the program. This process of discussion helps individuals to understand the program within the larger context and their role in determining the program results. The key output from the simulation is the articulation and recording, through consensus, of the Key Evaluation Questions in the Action Plan[3].
Part Two guides the evaluation team through a process of simulation on measuring the attributes underlying the various evaluation questions generated in the first workshop. Data sources and the types of instruments needed to extract the data are identified, and the instruments drafted. The finished instruments are then tested for validity and reliability. A detailed plan for the collection of data is prepared. The instruments, together with the findings from the testing and the detailed data collection plans, are submitted to the Program Evaluation Steering Committee for deliberation.
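As one illustration of what testing an instrument can involve, the sketch below computes Cronbach's alpha, a common internal-consistency (reliability) statistic, over a small pilot of Likert-scale responses. The source does not prescribe any particular statistic or tool; the pilot data, item count and threshold mentioned in the comments are assumptions made for this example.

```python
# A minimal reliability check for a piloted survey instrument.
# All data below are hypothetical pilot responses on a 1-5 scale.
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha: rows are respondents, columns are instrument items."""
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]
    item_var = responses.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = responses.sum(axis=1).var(ddof=1)    # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_var / total_var)

pilot = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
# Roughly 0.94 for this toy data; values above about 0.7 are a
# common rule of thumb for acceptable internal consistency.
print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")
```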
Part Three is technically the most demanding. Data analysis (using statistical packages) is carried out through a series of simulations in which data are analysed and organised to answer the relevant evaluation questions. Based on the findings of the analysis, a draft report is prepared and submitted to the Program Evaluation Steering Committee for deliberation. In all three stages the Steering Committee rigorously checks the quality of the outputs. After the third stage the committee recommends the report for submission to the management meeting for consideration. From this stage on, the Steering Committee becomes a stakeholder in the evaluation and is required to defend the report at the management meeting.
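The fragment below sketches the kind of analysis step described above, assuming the collected data have been loaded into a pandas DataFrame. The column names, regions, satisfaction scale and the 80% coverage target are hypothetical illustrations for a single evaluation question, not figures drawn from the MBS.

```python
# A minimal sketch of organising collected data to answer an
# evaluation question such as: "Did at least 80% of clients in each
# region receive the service, and how satisfied were they?"
import pandas as pd

data = pd.DataFrame({
    "region": ["North", "North", "South", "South", "Central"],
    "service_received": [1, 1, 0, 1, 1],   # 1 = received, 0 = not received
    "satisfaction": [4, 5, 2, 3, 4],       # 1-5 scale
})

summary = data.groupby("region").agg(
    coverage=("service_received", "mean"),
    mean_satisfaction=("satisfaction", "mean"),
)
summary["meets_target"] = summary["coverage"] >= 0.80
print(summary)
```

The same pattern scales to real survey files and to whichever statistical package the evaluation team is trained in; the point is that each table produced should map directly onto one of the Key Evaluation Questions.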
To ensure that the quality and integrity of the evaluation are maintained, the moderation process shown in the previous flowchart is guided and governed by the stages in the evaluation flowchart[4] above. The flowchart is divided into four segments: the Preliminary Planning and Assessment Stage, the Evaluation Design Stage, the Data Collection and Analysis Stage and, finally, the Reporting and Information Utilisation Stage. All of these components have been carefully worked into the contents of the three moderated workshops.
Some Problems in the Implementation of Evaluation
Continuous capacity building: this is the greatest challenge in keeping the momentum going. Staff turnover is one of the main concerns in ensuring that evaluation is undertaken continuously, because capacity needs to be developed before evaluation can be carried out.
Evaluation perceived as an external event: this is a question of mindset. Traditionally, program managers have been involved only in program planning and implementation, while evaluation was perceived as an externally driven form of audit, never as part of the program implementation cycle.
Accurate and credible institutional data: very often, data is sought only when an evaluation is being carried out. Data for evaluation is often not collected routinely, posing a serious problem in carrying out good evaluation.
Top management commitment: commitment to performance information is insufficient, especially at the outcome and impact levels.
Clear accountability lines: establishing clear lines of accountability within the current structure is a challenge.
Conclusion
The IE model has been improved incrementally over the last few years and is being applied successfully in carrying out evaluation of Malaysian public sector programs. Apart from yielding information for program and policy improvement, IE has had a significant impact on capacity building and on enhancing program knowledge. Studies carried out in the Ministry of Education clearly show that participants believe their involvement in evaluation has significantly increased their ability to analyse, understand and plan programs within the context of client needs and objective attainment.
Koshy Thomas
Financial Management Advisory Division
Ministry of Finance, Malaysia
[1] Some aspects adapted from a paper presented by Dr. Arnold Love at the Malaysian Evaluation Society Annual Conference, Kuala Lumpur, 2004.
[2] The Program Evaluation Steering Committee is set up at the agency level and chaired by the Deputy Secretary General/Deputy Director General of the Ministry/Department, with prescribed members. Its primary role is to determine the validity of the evaluation and to ensure its impartiality.
[3] The Action Plan is a predetermined format used by all agencies to propose an evaluation (equivalent to a research proposal). It ensures that all the requisite information for the Steering Committee's deliberations is provided.
[4] Arunaselam Rasappan and Jerome Winston, Training Manual for Public Sector Evaluation, ARTD (Malaysia) and PPSEI (Australia), 2000.