Bringing Political Science and APSA into the Discussion

Michael Brintnall, American Political Science Association

Notes for the APSA-Berkeley Democracy Audits and Governmental Indicators Conference, October 30-31, 2009

Democracy audits and governmental indicators inherently bridge the worlds of scholarship and practice. One objective of this working conference on audits and indicators is to lay the groundwork for ways that the political science community, and the American Political Science Association in particular, can work in both worlds – enhancing the integrity of the measures and advancing their practical application.

Much of this conference is about the sources and uses of governance indicators, and about the intellectual challenge of making them better. But this will be incomplete without a self-conscious discussion of how the scholarly professional community can best organize to support this work, without simply hijacking these indicators into the ivory tower.

In planning to move forward, there are several steps to consider:

What is the framework for immediate and practical next steps – e.g., an APSA Task Force?

What are the long-term options and strategies – e.g., an APSA Institute?

How can we shape the town-gown partnership between political science scholars and audit/indicator users into a productive climate for moving ahead?

Immediate Framework

Since 2002, the APSA has supported a series of presidentially appointed Task Forces focused on major public issues. The objective has been to speak boldly and with relevance to important matters, without taking a particular stance or becoming partisan.

There is a loosely articulated structure for this Task Force work within the Association, and a modest budget. Each Task Force is expected to have balance across many dimensions and must pass muster with the APSA Council as a credible, consequential, and balanced effort. The credibility of the Association backs up Task Force work, though results are presented as products of the Task Force, not of APSA itself.

Results are then presented to the political science community at the Association's Annual Meeting, and a plan is developed for public dissemination and discussion. One common product is a succinct, publicly accessible document explaining the issues for a general audience. Teaching materials are another frequent output, and press and legislative presentations are also customary. This framework can be adapted for the indicator discussions, and a Task Force provides one means of maintaining momentum for the work.

Longer Term Options

Indicators and audits are a partnership between intellectual communities and user communities. Sometimes these communities are coterminous, but even then the conflicting emphases and directions of the two will emerge. Intellectual communities have inherent conflicts of their own – varied disciplinary emphases, for example. User communities can be quite varied, and function with very different positions of authority over the resulting applications of indicators. The World Bank, for example, via the World Bank Institute, has the internal capacity both to develop and deploy indicators, though it is of course open to cross-pressures from the member countries that govern it.

Other user communities are much less autonomous and in less of a position to constrain the construction, application, or interpretation of the indicators they focus on. In particular, these groups may be more dependent on the face validity of the indicators they use, and be pressured toward intuitively applicable measures rather than analytically derived ones.

In either case, over the longer term, the construction, operation, and evaluation of indicators and audits rely on the stuff of political science: trust, institutional relationships and support, transparency, and basic public credibility – as well as on evidence and research expertise. To matter, indicators must not only be reliable and valid; they must also be trusted and convincing.

Trust is, of course, one of the building blocks of good work among scholars. In the case of indicators, it is also essential between scholars and those who use and rely on indicators and audit results. The trust needed to build working relationships with users has two constituent elements: trust that scholars have the requisite expertise to improve the work, and trust that scholars are doing something that makes sense to users – e.g., that face validity can be respected as well as construct validity, that the work is non-partisan and free of ideology both in fact and in perception, and that the work can be useful for the people it is intended for.

One way to help build or protect this trust is through the mediating role of an honest broker. The American Political Science Association may be able to serve in this broker role on a long-term basis. One purpose of the Indicators Conference is to begin to explore what tools would be needed for a user-friendly but rigorously constructed workshop on indicators, in which user communities can articulate objectives for indicators and receive both normatively and empirically oriented recommendations in return.

The Association might function as a clearinghouse of relevant expertise, forming panels from its membership suited to the tasks brought to it. This is analogous to the work done, for example, by the National Academy of Public Administration in its project panels, or by the one-time project of the National Association of Schools of Public Affairs and Administration in support of the Environmental Protection Agency. In both cases, public objectives are married with independent professional or academic expertise drawn topically from a very large community of scholars.

Conceptualizing Partnerships

While the practical aspects of building partnerships with users are complex enough, there are also difficulties inherent in, or attributed to, academic perspectives that may interfere with partnerships on indicators. User communities – government, NGOs, the media – may operate in different cultures from academia. Peter Szanton speaks of "two cultures" that divide government and academia in terms of ultimate object, time horizon, focus, mode of thought, mode of work, most valued outcomes, mode of expression, concern for feasibility, and stability of interest.[1] This can pose obstacles to collaboration on the kinds of challenges posed by indicators, which must be credible and effective in both cultures.

One of these obstacles is the perception that political scientists, or social scientists in general, are the "veto players" – that our work is about what cannot be done, or about what is flawed. This reputation has already been cited informally in the conversations leading up to this conference. To make a historical reference: during the War on Poverty in the United States, a period of great social and political invention, there followed a sense of disappointment that real achievement did not occur. Subsequent analysis has noted that one enabling factor for this climate of seeming failure was an academic environment of critical evaluation that focused on what did not work rather than on what might move results forward.

We ought not make the same mistake, and we need to find ways to be additive in the improvement of indicators and audits, rather than simply a voice of "don't do it that way." This is especially difficult in contexts in which users may themselves have overblown expectations of what indicators mean and what they can achieve.[2] It is indeed a delicate task to help move forward work that must in other respects be reined in.

Another frequently cited concern is that relevant evidence will be too narrowly defined and that the scholarly focus will fall predominantly on measures that can be quantified and aggressively standardized. While this threat is attributed more often to the indicators work of economists than to that of political scientists, it is a concern we all share.[3] There needs to be room for nuance and values as well as standard measures in our discussions of where to head. This too can be difficult in contexts in which "nuance and values" may be shaped by ideologies and partisan expectations – precisely the distortions in understanding that indicators are meant to overcome.

So our challenge in these conversations is multi-dimensional, calling not only for a better understanding of how to make abstract concepts of governance, freedom, and the like operational, but also for consideration of the best ways to engage scholarly analysis with multiple publics.


[1] Pearson, Robert W. 2005. Book Review Essay: The Uneasy Partnership between Social Science and Public Policy. The Annals of the American Academy of Political and Social Science 600:157-173.

[2] Arndt, Christiane. 2009. Governance Indicators. Dissertation, Maastricht University, p. 180.

[3] Lynn, Laurence E., Jr. 2001. The Making and Analysis of Public Policy: A Perspective on the Role of Social Science. In Social Science and Policy-Making: A Search for Relevance in the Twentieth Century, edited by D. L. Featherman and M. Vinovskis. Ann Arbor: University of Michigan Press.