Task Team Report:
Statistical Capacity Building Indicators
Report of the Task Team on Statistical Capacity Building Indicators
to the
PARIS21 Steering Committee meeting, June 13, 2002
Lucie Laliberté, Chairperson
Introduction
1. The mandate of the PARIS21 Task Team on Statistical Capacity Building (SCB) Indicators[1] is to develop, by October 2002, indicators that will facilitate tracking countries' progress in building their statistical capacity.
2. This constitutes the first systematic attempt to develop indicators of statistical capacity building that are applicable across countries. It was prompted by the strong and pressing demand that has originated from diverse quarters over the last few years. Keeping score, as indicators do, can be very powerful in galvanizing the energy to improve a situation; this kept driving the effort, since developing indicators is neither a quick nor a simple task.
3. Among the converging trends spearheading the demand was the emphasis put on statistics by the new “evidence-based” approach of the internationally agreed development goals to reduce poverty. Further, the international financial structure that underpins globalization also places a premium on timely and accurate information, with national statistics increasingly taking on the features of an international public good. The case now increasingly being made for greater resources for improved statistics also prompted a realization that much more needs to be known about just what statistical capacity is, how needs can be determined, and how progress can be measured. This is especially so for technical assistance, which faces ever more pressing calls for accountability. Everyone wants to know: What are the results of technical assistance? How do they compare to the resources allocated? What are the lessons learned? How best to move forward? Donors want measurable results, and national authorities want to know whether the results warrant their own resources. Mrs. Carson, Director of the IMF Statistics Department, summed it all up at the PARIS21 Seminar on statistical capacity building indicators[2]:
“The time is ripe to look seriously at the question of statistical capacity, statistical capacity building, and indicators of statistical capacity building.”
4. Indicators have to be distilled from the long and complex statistical process that necessarily precedes data dissemination. Adding to the complexity is typically the large and diverse number of agencies involved in a country’s statistical system, and the widely varying scope and quality of the data they produce.
5. Given this complexity, the Team used a multi-step strategy that features both systematic and consultative approaches. A methodology was adopted (first step) to describe the full gamut of statistical activities (second step). From this framework, indicators were derived (third step) and tested (fourth step). The first two steps have been fully documented elsewhere[3]; this paper describes the activities under the third and fourth steps. A first section explains the approach that led to the indicators, inclusive of their testing.[4] A second section describes the indicators of statistical capacity building. A third section explores what is next. A fourth section concludes.
Part One: The road to the selection of the indicators
The description of the statistical activities
6. The Task Team put together a description of statistical activities in The Framework for Determining Statistical Capacity Building Indicators. This provided a structured approach to statistical operations that spanned the external environment (law, users, respondents, and financing), structures, processes, and functions (outputs). The throughput statistical process, along with management and technical support, was taken into account. Further, by focusing on the components that were common to all statistical systems, to all data producing agencies, and to all statistical outputs, the Framework abstracted from countries' particularities. This ensured the comprehensiveness and universality of the Framework from which the indicators were to be derived.
How were the indicators identified?
7. The next step consisted of extracting indicators. The six-part structure[5] of the description proved extremely valuable in setting up the broad dimensions (e.g., relevance and accuracy) from which to extract the variables most relevant to the statistical activities, on which the indicators were to be based.
8. The indicators needed to be:
- relatively easy to apply;
- with measures kept as simple as possible;
- with a cost that would not exceed the value of the information generated;
- and with outcomes that would be consistent across the various data producing agencies in a country, among countries, and over time.
9. This called for an interactive process in which the criteria for the indicators, the variables selected, and their measures mutually influenced one another. In certain cases, the tendency was to focus on variables that had objective measures but that were not necessarily relevant (e.g., the response rate of surveys that could have been ill-designed in the first place). In other cases, the measurement of relevant variables simply followed from their operational definitions (e.g., defining timeliness as so many periods after the reference period effectively provided the measure of timeliness). For other variables, the concept was very simple (e.g., an effective statistical law), but its operational definition proved vague and ambiguous. Often, a direct measurement was not possible, and a measure could only be produced jointly from multiple variables; objective measures for such joint variables were seldom available. This led to measures whose subjectivity had to be minimized, which was done using an ordinal scale for which benchmarks were subsequently provided at each of the four scale levels. The testing of the indicators in two countries helped in further refining them.
10. The interaction and inherent complexity of the process led to producing the indicators in two successive phases. The first phase indicators focused exclusively on process. In the second phase, these indicators were supplemented and consolidated into indicators that not only encompass process but also span resources and outputs.
The first phase
11. The indicators arrived at in the first phase were applicable to any data producing agency and any statistical product. They were a tool to be applied to a given output by a data producing agency, the intent being that the results would determine the capacity of that agency to produce the output in question. To the extent that they were to be applied to outputs deemed to meet major user needs, they were viewed as demand-driven indicators.
12. The sequence for deriving these indicators was as follows (please refer to Table 1). A first list of over 100 potential candidates was identified from The Framework for Determining Statistical Capacity Building Indicators. They represented a mixture of finite statements describing the statistical activities and of performance indicators selected from the relevant literature on the performance of data producing agencies. These were the subject of intensive consultation with multilateral donor agencies, which led to a second list of over 100 indicators, from which 34 were selected on the basis that:
- they were understandable by statisticians, resource providers and users;
- they covered a sufficiently broad range of statistical operations and, hence, applied to a variety of statistical circumstances;
- they were amenable to some kind of measurement or comparison against best practices and, as such, could be ranked according to a four-level assessment scale (Observed, Largely Observed, Largely Not Observed, Not Observed); a minimal sketch of how such a rating might be recorded follows this list.
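By way of illustration only, the sketch below records one such rating; the class and field names are hypothetical and are not taken from the Task Team's questionnaire. Only the four scale labels come from the report.

    # Minimal illustrative sketch (hypothetical names); only the four
    # scale labels come from the report.
    from dataclasses import dataclass
    from enum import Enum

    class Rating(Enum):
        """First-phase four-level assessment scale."""
        OBSERVED = 4
        LARGELY_OBSERVED = 3
        LARGELY_NOT_OBSERVED = 2
        NOT_OBSERVED = 1

    @dataclass
    class IndicatorRating:
        dimension: str   # e.g., "4. Serviceability"
        element: str     # e.g., "4.2 Timeliness and periodicity"
        indicator: str   # text of the indicator being assessed
        rating: Rating

    # Hypothetical example: rating one indicator for a given agency/output.
    example = IndicatorRating(
        dimension="4. Serviceability",
        element="4.2 Timeliness and periodicity",
        indicator="Outputs are released within announced timeframes",
        rating=Rating.LARGELY_OBSERVED,
    )
    print(example.rating.name)  # LARGELY_OBSERVED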
Table 1: The first phase of development of Statistical Capacity Building Indicators

Dimension | Element | Potential candidates | Second list | First phase
0. Prerequisites | 0.1 Legal and institutional environment | 28 | 32 | 6
0. Prerequisites | 0.2 Resources | 11 | 44 | 9
0. Prerequisites | 0.3 Quality awareness | 7 | 16 | 1
1. Integrity | 1.1 Professionalism | 6 | 9 | 1
1. Integrity | 1.2 Transparency | 6 | 8 | 1
1. Integrity | 1.3 Ethical standards | 3 | 3 | 1
2. Methodological soundness | 2.1 Concepts and definitions | 2 | 3 | 1
2. Methodological soundness | 2.2 Scope | 1 | 2 | 1
2. Methodological soundness | 2.3 Classification/sectorization | 1 | 2 | 0
2. Methodological soundness | 2.4 Basis for recording | 0 | 0 | 0
3. Accuracy and reliability | 3.1 Source data | 5 | 11 | 1
3. Accuracy and reliability | 3.2 Statistical techniques | 5 | 6 | 0
3. Accuracy and reliability | 3.3 Assessment and validation of source data | 3 | 3 | 2
3. Accuracy and reliability | 3.4 Assessment and validation of intermediate data and statistical outputs | 3 | 5 | 1
3. Accuracy and reliability | 3.5 Revision studies | 1 | 2 | 0
4. Serviceability | 4.1 Relevance | 7 | 11 | 2
4. Serviceability | 4.2 Timeliness and periodicity | 5 | 7 | 1
4. Serviceability | 4.3 Consistency | 3 | 3 | 0
4. Serviceability | 4.4 Revision policy and practice | 3 | 4 | 1
5. Accessibility | 5.1 Data accessibility | 11 | 16 | 2
5. Accessibility | 5.2 Metadata accessibility | 5 | 6 | 1
5. Accessibility | 5.3 Assistance to users | 4 | 9 | 2

(Columns: the first list of potential candidates; the second list resulting from consultation; the 34 indicators selected for the first phase.)
13. The usefulness of these indicators lay in measuring the internal organizational health and efficiency of any data producing agency. This was done by assessing how resources were used and how the internal activities of the agency meshed with one another. Since many aspects of inputs and processes are not quantifiable, an ordinal scale accompanied the indicators.
14. The Seminar on Statistical Capacity Building Indicators showed the need to go beyond indicators that focused exclusively on processes. The main limitations of process indicators were that they were not specifically attached to actual statistical outputs, inputs, or the environment, and that they left much room for subjective interpretation, as the ordinal scale was not benchmarked.
“Finally I do think that the choice between assessing the process and assessing the reality is not to be taken lightly. It is much more direct to assess performance and recognise good performance than to assess whether there is a process to review performance. If the process is good but the real performance is poor and this is too common a theme of the indicators then it will discredit the results”[6] (Tim Holt).
The second phase
15. In the second phase, indicators were devised with a view to being more firmly grounded in the circumstances of countries. This was done at two levels: at the level of the statistical system (referred to as Section A indicators) and at the level of data producing agencies for specific outputs (Section B indicators).
16. Devising indicators at the level of the statistical system (Section A indicators) was challenging on at least two counts. First, a statistical system is a notion that is difficult to define, since it is not represented by a single organization. It is more of an intellectual construct, made up of the summation of organizations that vary among countries in number, structure, and the authority to which they report. Further, the dividing line between data producing agencies/units and ministries producing administrative data not only differs among countries but also varies over time as a result of institutional changes within a country. Since the composition of the system varies across countries, the identification of its constituent agencies for indicator purposes was left to the country's data producing agency with the most responsibility for statistical coordination in the country in question. The second challenge was to identify quantifiable characteristics that were common to all data producing agencies. This was done by focusing on quantitative indicators that were based on the resources and outputs of the constituent data producing agencies.
17. Section B indicators also gave rise to two broad types of challenges, which largely stemmed from keeping the application cost of the indicators manageable: not all agencies could be assessed, nor could all their products. The first type of challenge involved the selection of the data producing agencies and of the statistical outputs to be assessed. The choice was driven toward agencies producing the statistical outputs deemed most relevant. This was easier said than done. What criteria should be used to select relevant outputs? Do they need to be relevant to major current issues? Are the current issues those of individual countries or those of the international community? Which products should be chosen, and why? What results are expected, and why? Are these products within the manageable limits of countries' circumstances? These issues were dealt with by narrowing the choice to a very limited number of broad statistical domains and selecting a representative statistical output for each domain. Narrowing the domains and the statistical outputs effectively provided a sample of representative agencies and outputs, in keeping with the low-cost requirement. The second type of challenge concerned deriving the indicators themselves, which was done by building upon the process indicators identified in the first phase. The large number of individual indicators (with no benchmarks) needed to be traded off against a smaller number, which meant more complex indicators and greater difficulty of assessment. The first phase process indicators were therefore bundled into a more limited number, and each bundled indicator was provided with benchmarks describing each of the four scale levels, as sketched below.
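As an illustration of the bundling, the sketch below folds several first-phase process indicators into one Section B indicator with a benchmark for each of the four levels. The component indicators and benchmark wording are hypothetical; the actual wording is in the questionnaire.

    # Illustrative sketch only: the component indicators and benchmark
    # texts below are hypothetical, not the questionnaire's wording.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class BundledIndicator:
        name: str              # e.g., "3.1 Adequacy of source data"
        components: List[str]  # first-phase process indicators folded in
        benchmarks: Dict[int, str] = field(default_factory=dict)  # level -> description

    source_data = BundledIndicator(
        name="3.1 Adequacy of source data",
        components=[
            "Source data cover the target population",
            "Source data are available with adequate detail and timeliness",
        ],
        benchmarks={
            4: "Source data fully meet the needs of the output (Observed)",
            3: "Minor gaps in coverage or detail (Largely Observed)",
            2: "Significant gaps in coverage or detail (Largely Not Observed)",
            1: "Source data largely unavailable (Not Observed)",
        },
    )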
Part Two: The Statistical Capacity Building Indicators
18. The Task Team's comments and the subsequent testing by countries of the two broad types of indicators helped tremendously in sharpening the content and the format of the indicators (please refer to Table 2).
19. In terms of content, the resulting Section A indicators, which pertain to the statistical system, provide a rough idea of the depth and breadth of statistical activities at the country level. They quantify elements of resources, inputs, and statistical products. Resources include the domestically and externally funded annual budget and staff, as well as selected equipment. Inputs are data sources, measured in terms of surveys conducted and administrative data used over a three-year period. Statistical products are assessed by the modes/channels of data releases (publications, press releases, website, etc.) and the areas of statistics produced (with 10 specific statistical outputs listed). A list of the data producing agencies and of the external donors covered by the Section A indicators is also provided. In total, the 18 indicators of Section A, being largely quantitative, give an idea of the size of the statistical system, the extent of external financing, the number of surveys and administrative data sources used, and the diversity of the statistical outputs. They also provide for a snapshot comparison of statistical capacity among countries. A minimal sketch of how this information might be structured follows.
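The sketch below simply gathers the kinds of largely quantitative information described above into one record per country. All field names are hypothetical; the actual Section A questionnaire may group its 18 indicators differently.

    # Illustrative sketch: field names are hypothetical; Section A's 18
    # indicators cover resources, inputs and products of this kind.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SectionASnapshot:
        country: str
        agencies_covered: List[str]   # data producing agencies listed
        external_donors: List[str]    # external donors listed
        budget_domestic: float        # domestically funded annual budget
        budget_external: float        # externally funded annual budget
        staff: int
        surveys_3yr: int              # surveys conducted over three years
        admin_sources_3yr: int        # administrative data sources used
        release_channels: List[str]   # publications, press releases, website...
        outputs_produced: List[str]   # which of the 10 listed outputs exist

Records of this kind, being largely quantitative, lend themselves to the snapshot comparisons across countries that paragraph 19 describes.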
Table 2: Statistical capacity building indicators

Section A: System-wide quantitative indicators

Section B: Data producing agency and statistical output qualitative indicators
0. Prerequisites:
0.1 Collection of information and preservation of confidentiality guaranteed by law and effective
0.2 Effective coordination of statistics
0.3 Staff level, expertise and internet support adequacy
0.4 Building and equipment adequacy
0.5 Planning, monitoring and evaluation measures implemented
0.6 Organizational focus on quality
1. Integrity
1.1 Independence of statistical operations
1.2 Culture of professional and ethical standards
2. Methodological soundness
2.1 International/ regional standards implemented
3. Accuracy and reliability
3.1 Adequacy of source data
3.2 Response monitoring
3.3 Validation of administrative data
3.4 Validation of statistical outputs
4. Serviceability
4.1 User consultation
4.2 Timeliness of statistical outputs
4.3 Periodicity of statistical outputs
5. Accessibility
5.1 Clarity of dissemination
5.2 Updated metadata
20. Section B indicators cover three broad domains of statistics: economic, demographic, and social. For the time being, the statistical outputs selected as the most representative of each domain are, respectively, GDP, average life expectancy at birth, and poverty incidence (these could be subject to change after consultation with experts in the three domains). For each of these three representative statistical outputs, 18 qualitative indicators capture aspects of the statistical production process. Each indicator is assessed according to a four-level ordinal scale, with detailed benchmarks provided for each level; a minimal sketch of such an assessment follows.
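The report does not prescribe any aggregation of ratings; the sketch below only tallies the four levels across the indicators assessed for one representative output. The ratings shown are invented for illustration, not test results.

    # Illustrative sketch: tallies ratings on the four-level scale for
    # one output; the ratings below are invented, not test results.
    from collections import Counter

    SCALE = ("Observed", "Largely Observed", "Largely Not Observed", "Not Observed")

    def profile(output_name: str, ratings: dict) -> None:
        """Print how many of the output's indicators fall at each level."""
        counts = Counter(ratings.values())
        print(f"{output_name}: {len(ratings)} indicators assessed")
        for level in SCALE:
            print(f"  {level}: {counts.get(level, 0)}")

    # Two of the 18 Section B indicators, applied to the GDP output.
    profile("GDP", {
        "2.1 International/regional standards implemented": "Largely Observed",
        "4.2 Timeliness of statistical outputs": "Observed",
    })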
21. In terms of format, the indicators are presented as a two-page table. The table is part of a survey questionnaire that provides extensive directions on how to report the information in the table, inclusive of descriptive benchmarks for the four-level assessment scale of the Section B indicators. To facilitate understanding of how to complete the questionnaire, the possibility is currently being explored of attaching as models some of the results from the testing exercises.
22. The format of the table was designed in such a way as to be readily usable as output.
Part Three: What is next?
23. A number of steps are essential to the success of the Statistical Capacity Building indicators. Some of them are briefly reviewed below.
Finalize the questionnaire on SCB indicators
24. Further consultation and testing will be required to finalize both the content and the format of the questionnaire. For instance, the IMF intends to further test the questionnaire in its work on technical assistance, and the questionnaire could be further assessed during the PARIS21 workshops.
25. This should be done with a view to maintaining the requirements set at the beginning of the exercise: relatively easy to measure and apply, not too costly, and with reproducible outcomes. At the same time, the information requested should be sufficiently generic to provide an overall view of a country's statistical conditions, and sufficiently specific to support decision making at given levels.
26. Recognizing that applying the indicators requires considerable logistics in terms of cooperation, coordination, and integration among a country's data producing agencies, much care was taken at this stage to limit the amount of information requested so as to facilitate the implementation of the indicators. At the system-wide level, the information was largely quantitative; for the qualitative information, the coverage was limited to a sample of data producing agencies, with the focus on representative products. While further consultation will lead to modifying the questionnaire, the constraints of managing reporting burden should keep guiding the steps toward finalizing it.
Modalities for applying the questionnaire
27. The administration of the questionnaire would have to be coordinated at the national as well as the international level. As noted earlier, administering the questionnaire at the national level could be devolved to the data producing agency responsible for coordinating statistical activities at the country level.
28. At the national level, the SCB indicators could be of special interest to countries[7] that:
- have major deficiencies in available statistics and require sizeable statistical capacity building and fundamental changes to improve statistical operations; and
- cannot develop their statistical capacity without external assistance because of limited domestic resources.
29. The usefulness of the indicators will be enhanced by having results available for a critical mass of countries. This would involve some form of coordination at the international level.