FOLIOz
Facilitated Online Learning as an Interactive Opportunity in Australia & New Zealand

Managing for Service Quality (MSQ)

Briefing #2 / September 2009

Performance Indicators

What is a performance indicator?

According to David Streatfield of Information Management Associates, a performance indicator is “a statement against which achievement in an area of activity can be assessed” (1). Performance indicators are usually difficult to construct and take time to develop. Streatfield quotes Gray and Wilcox, who describe performance indicators as “socially constructed abstractions arising from attempts to make sense of complex reality” (2).

Why are they important?

Performance indicators underpin any successful performance monitoring system – they “define the data to be collected to measure progress and enable actual results achieved over time to be compared with planned results” (3). Thus, they are an indispensable tool for supporting management decision-making about service strategies and activities. Performance indicators can also be used to motivate staff involved in delivering a particular service and to communicate achievements to service stakeholders.

What are the characteristics of well-chosen performance indicators?

For a performance indicator to be considered important, it should provide at least one of the following (1):

  • information about the performance of a system
  • information about the central features of a system
  • information on potential or existing problem areas
  • information that is policy relevant.

In addition, the following seven criteria can be used to evaluate a candidate performance indicator:

SEVEN CRITERIA FOR ASSESSING PERFORMANCE INDICATORS (3)

1. DIRECT. A performance indicator should measure as closely as possible the result it is intended to measure. If a direct measure is not possible, one or more proxy indicators might be appropriate.

2. OBJECTIVE. An objective indicator has no ambiguity about what is being measured. That is, there is general agreement over interpretation of the results.

3. ADEQUATE. Taken as a group, a performance indicator and its companion indicators should adequately measure the result in question. The number of indicators required depends on the complexity of the result being measured, the level of resources available for monitoring performance, and the amount of information needed to make reasonably confident decisions.

4. QUANTITATIVE, WHERE POSSIBLE. Quantitative indicators are numerical (a number, percentage, or ratio, for example). Qualitative indicators are descriptive observations (an expert opinion or a description of behaviour).

5. DISAGGREGATED, WHERE APPROPRIATE. Disaggregating reader responses by clinical specialty, gender, age, location, or some other dimension is often important from a management or reporting point of view.

6. PRACTICAL. An indicator is practical if data can be obtained in a timely way and at a reasonable cost.

7. RELIABLE. A final consideration in choosing performance indicators is whether data of sufficiently reliable quality for confident decision-making can be obtained.

How do I produce performance indicators? (3)

Step 1. Clarify the Results Statements. Good performance indicators start with results statements that people understand and agree upon. Begin by listing the criteria by which a service might be judged. This helps you to produce a list of specific qualitative statements (e.g. 'access to the library'; 'availability of training'). These statements then become the raw material for creating indicators. As mentioned above, indicators must be capable of being assessed and should preferably be quantifiable. Streatfield suggests that you “start with the judgement criteria rather than with the indicators” (1).

Step 2. Develop a List of Possible Indicators. There are usually many possible indicators for any desired outcome, but some are more appropriate and useful than others. You should not simply go for the easiest “hits”. Instead, brainstorm a list of potential measures and then “whittle these down” to the most appropriate.

Step 3. Assess Each Possible Indicator. Next, assess each possible indicator on the initial list. Experience suggests using seven basic criteria [see list above] for judging an indicator's appropriateness and utility. When assessing and comparing possible indicators, it is helpful to use a matrix with the seven criteria arrayed across the top and the candidate indicators listed down the left side. With a simple scoring scale, for example 1-5, rate each candidate indicator against each criterion. These ratings give an overall sense of each indicator's relative importance.
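To make the matrix concrete, here is a minimal worked sketch in Python. The three candidate indicators and all of the 1-5 scores are invented purely for illustration; they are not drawn from this briefing or from the cited sources.

    # Hypothetical example: score candidate indicators against the seven
    # criteria and rank them by total score. All names and ratings are invented.

    CRITERIA = ["Direct", "Objective", "Adequate", "Quantitative",
                "Disaggregated", "Practical", "Reliable"]

    # Each candidate is rated 1 (weak) to 5 (strong) against each criterion,
    # in the order listed in CRITERIA.
    candidates = {
        "Percentage of literature searches completed within 2 working days": [5, 4, 3, 5, 3, 4, 4],
        "Number of training sessions delivered per quarter":                 [3, 5, 2, 5, 4, 5, 5],
        "Reader-reported impact of search results on patient care":          [5, 2, 4, 2, 3, 2, 3],
    }

    def rank_candidates(scores):
        """Sum each indicator's ratings and sort from highest to lowest total."""
        totals = {name: sum(ratings) for name, ratings in scores.items()}
        return sorted(totals.items(), key=lambda item: item[1], reverse=True)

    if __name__ == "__main__":
        print("Criteria:", ", ".join(CRITERIA))
        for name, total in rank_candidates(candidates):
            print(f"{total:>3}  {name}")

The totals only give a rough ranking; as Step 4 below makes clear, the final selection still has to weigh the cost of collecting each measure against the value of the information it provides.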

Step 4. Select the "Best" Performance Indicators. The next step is to narrow the list to the final indicators that will be used in the performance monitoring system. These should represent the optimum set that meets the need for management information at a reasonable cost. Here you must consider not only the actual cost of data collection but also the value of the information each measure will provide.

Impact measures may be particularly resistant to quantification. You may have to settle for a number of measures that come close enough to have a bearing on the criteria for success, along with the more qualitative criteria themselves. For this reason Streatfield suggests that you may prefer to describe qualitative impact indicators as 'success criteria' (1).

Indicators (including success criteria) should:

  • be as few as possible
  • allow meaningful comparisons to be made over time
  • cover significant parts of the activities of the service (not all or even most, according to Gray and Wilcox (2))
  • reflect the existence of competing priorities.

What are the issues?

Three major issues should be taken into account in assessing whether performance indicator data is reliable and valid (4):

CONSISTENCY – obviously, if performance is being compared across institutions, the data must be defined similarly and collected consistently in each institution. Even where data is used only within a single institution there is the possibility of inconsistency. To take one extreme, but realistic, example: suppose your staff are required to do a head count at 10 a.m. and 2 p.m. every day to determine occupancy of the library. How do they treat an over-worked student sleeping at a desk? One staff member may be doing a literal “head” count, while another may count only those who are “working” in the library. Data definitions may cover the most common situations, but this still leaves a surprising potential for variation.

EASE VERSUS UTILITY – Performance indicators are typically developed from data that can be gathered easily. Of course, what is easy to count is not necessarily what “counts”. It is tempting to set goals based on the data available rather than starting from the goals or objectives of the organisation and then defining the measures.

VALUES – related to the above point are the values that underlie a particular measure. Ideally the indicator should be in harmony with the values held by the library and its parent organisation. However, a surprising number of measures may contain perverse incentives that encourage performance “massaging” behaviours. For example, an apparently rational measure such as the percentage of occasions on which a reader is able to find a known item on the shelves may simply disguise poor borrowing statistics (items stay on the shelf partly because they are rarely borrowed). Similarly, reshelving times could be affected by a library assistant’s unilateral decision to sort books into classmark order on the trolley, resulting in different statistics for classmarks at the beginning of the sequence than for those at the end.

Where can I find further examples?

EQUINOX: Library Performance Measurement and Quality Management System – Performance Indicators for Electronic Library Services (Accessed August 2009)

Canadian Health Libraries Association/Association des bibliothèques de la santé du Canada. Proposed Indicator of Library / Information Services. Submitted by the 2002-2003 Indicator Working Group (J MacDonald, N McAllister, S Powelson, C Woodward), Spring 2003 (Accessed August 2009)

Council of Australian University Librarians (2005) Performance Indicators – Links (Accessed August 2009)

Council of Australian University Librarians (2004) Performance Indicators (Accessed August 2009)

References

1. Streatfield, D. & Information Management Associates (2003) Best Value and Better Performance in Libraries A: How library service managers can get to grips with assessing the impact of services (Accessed August 2009)

2. Gray, J. and Wilcox, B. (1995) 'Good School, Bad School': Evaluating Performance and Encouraging Improvement. Buckingham: Open University Press.

3. Performance Monitoring and Evaluation TIPS (1996) Number 6. USAID Center for Development Information and Evaluation. Selecting Performance Indicators (Accessed August 2009)

4. Kyrillidou, M. (1996) Introduction. In: Current Context for Performance Indicators in Higher Education (Accessed August 2009)

Further reading

Association of Universities and Colleges of Canada (1995) A Primer of performance indicators. Research File; v. 1 no. 2 (June): p. 1-8 (Accessed August 2009)

Cram, J. (2005) Jennifer Cram: Papers on Performance Measurement (Accessed August 2009)

Kena, J. (1999) Performance Indicators for the Electronic Library (Accessed August 2009)

Performance Measurement and Performance Indicators: A Selective Bibliography (Bibliography series #8) Canadian Libraries & Librarianship. Compiled by Douglas Robinson, May 2000 (Accessed August 2009)

Poll, R. (2008) The cat’s pyjamas? Performance indicators for national libraries. Performance Measurement and Metrics; 9 (2): 110-117. Available to ALIA members only via ProQuest (Accessed August 2009). Please note that you will need to log in to the ALIA website to access this link.

Worthington, A. (1999) Performance Indicators and Efficiency Measurement in Public Libraries. The Australian Economic Review; 32 (1): 31-42.