IPMN Workshop Paper Abstracts


This is a Word document downloaded from the ESRC Public Services Programme website.

This document contains abstracts of the following papers:

  • Hood et al: Rating the Rankings: Assessing International Rankings of Public Service Performance
  • Van de Walle: Contested Definitions of Performance: Evaluating the Performance of Services of General Interest
  • Wilson and Piebalga: Accurate Performance Measure but Meaningless Ranking Exercise? An analysis of the new English school league tables
  • Arndt: The Politics of Governance Ratings

DRAFT: PLEASE DO NOT QUOTE WITHOUT AUTHORS’ PERMISSION

Rating the Rankings: Assessing International Rankings of Public Service Performance
Christopher Hood, Craig Beeston and Ruth Dixon
University of Oxford

Paper prepared for the IPMN Workshop, ‘Ranking and Rating Public Services’, Worcester College, Oxford, 7-9 August 2007

Abstract
Are international rankings of governance and public services a useful tool for measuring public management performance, or just an opportunity for attention-grabbing on the basis of spurious data? To explore that question, this paper begins by describing the growth of such rankings over recent decades and discussing some of the different types. In the second part, we test the robustness of the rankings approach by applying the ranking method to rankings themselves. In this part of the analysis, 11 well-known international governance and public service rankings are rated relative to one another on six criteria relating to validity and reliability. It is difficult to imagine that any ranking of governance or composite public services would be less complicated than this exercise, and our analysis brings out the size of the confidence interval problem, the extent to which rankings alter with small changes in the weighting of their various components, and the real difficulty of producing valid and reliable indicators of validity and reliability themselves. The third section attempts to reconcile the seemingly inexorable rise in demand for governance indicators with the deep perplexities associated with the methods. Would it be better to scrap such assessments altogether? Is some information, however imperfect, better than none? Are there ways in which the validity and reliability of rankings might realistically be improved in the future, particularly by kitemarking the rankings themselves?

Contested Definitions of Performance: Evaluating the Performance of Services of General Interest*
Steven Van de Walle

INLOGOV - School of Public Policy
University of Birmingham
Birmingham B15 2TT

Abstract
In this paper, I focus on the difficulties in evaluating the performance of so-called services of general interest. These generally include water and electricity supply, telephony, postal services, and public transport, where providers are subject to certain universal service obligations. Because of the tensions between European internal market requirements and these universal service obligations, there is considerable debate over the criteria to be used to evaluate the performance of these services. In addition, the status of these public services as ‘public’ or ‘essential’ services is disputed. These services create and reflect identities and public values. I suggest that services of general interest can be studied as cultural phenomena to learn more about administrative values and value change in European countries.

Keywords: Services of General Interest, public service values, liberalisation, universal service delivery

* This is a draft. This topic will be the object of further research within the framework of my ESRC Public Services Programme
Fellowship on ‘public attitudes towards services of general interest in comparative perspective’ (Oct 2007 to Nov 2008).


Accurate performance measure but meaningless ranking exercise? An analysis of the new English school league tables
Deborah Wilson* and Anete Piebalga
CMPO, University of Bristol

Paper prepared for the IPMN Workshop, ‘Ranking and Rating Public Services’, Worcester College, Oxford, 7-9 August 2007

Abstract

Parental choice among schools in England is informed by annually published school performance (league) tables. The 2006 league tables included a measure of contextual value added (CVA) for the first time. By explicitly accounting for the characteristics of a school’s intake, CVA should provide a more accurate measure of school performance, or effectiveness. In this paper we use UK government administrative data to replicate CVA and other key performance measures in order to investigate the extent to which the current league tables provide the information necessary to support parental choice on the basis of school effectiveness. We find that while CVA does provide a more accurate measure of school performance, school rankings based on CVA are largely meaningless: almost half of English secondary schools are indistinguishable from the national average. We suggest an alternative way of presenting the CVA measure to provide meaningful, comparative information on school performance.

*corresponding author: CMPO, University of Bristol, 2 Priory Road, Bristol BS8 1TX, UK. Tel: +44 (0)117 331 0821. Email:

PLEASE DO NOT QUOTE WITHOUT AUTHOR'S PERMISSION

The Politics of Governance Ratings*
Christiane Arndt
Maastricht Graduate School of Governance and Harvard University

Paper prepared for the workshop 'Ranking and Rating Public Services’ of the International Public Management Network in Oxford, 7-9 August 2007
Draft July 2007

Abstract

Rapidly rising attention to the quality of governance in developing countries is driving explosive growth in the use of governance 'indicators', both for aid-allocation and investment decisions and for academic analysis. International organisations play a leading role in both the use and the supply of governance indicators. This paper attempts to explain i) the reasons for this role, ii) the problems associated with the most popular indicators produced by international organisations, and iii) the reasons for the widespread misuse of these indicators. It argues that, while there will never be one perfect governance indicator, the production and use of more transparent governance indicators will better serve the needs of users and developing countries alike.

*This paper builds on and draws extensively from the book 'Uses and Abuses of Governance Indicators' by Arndt and Oman (2006). Many of its findings are based on interviews with donors, risk analysts, academics and OECD and World Bank staff who requested anonymity but whose assistance was invaluable. This paper benefited from enormously valuable suggestions and comments from Denis de Drombrugghe and Chris de Neubourg at the Maastricht Graduate School of Governance, Charles Oman at the OECD Development Centre, Stephan Knack and Nick Manning and other, anonymous, commentators at the World Bank and Simon Kaja at the University of British Columbia. The author is solely responsible for the views expressed in this paper.

Posted August 2007