Policy Impact of Knowledge and Knowledge Organisations:
from understanding impact towards measuring it
Tuesday 20 June 2017
REFLECTIONS AND THE WAY FORWARD
(Collaborative document of the workshop participants)
On 20 June 2017 the European Commission's science and knowledge service, the Joint Research Centre (JRC), brought together some 110 key practitioners from both the knowledge supply side and the policy demand side to identify what is meant by the impact of research on policy and how best to measure it.
The workshop explored the impact of knowledge on policymaking from both the accountability and the learning perspectives, the latter being especially important for strengthening the impact of knowledge on the policy process. An additional objective was to bring together a community of practitioners in the field and agree on the steps necessary to take the discussion of knowledge impact on policies further.
The morning focused on what is meant by policy impact, why we should measure it, and how to reconcile the need to demonstrate an impact on policy and the need to achieve it in practice.
The afternoon focused on qualitative and quantitative approaches to measuring impact, including the main strengths and weaknesses of current metrics, how to avoid bias in measurement, and where the limits of measuring impact on policy lie.
The presentations of the speakers can be found on
The workshop was designed as an exchange of ideas, knowledge and experience, geared towards:
- Reaching a consensus on a definition of the impact of knowledge on policies;
- Identifying the most important issues/challenges in measuring impact;
- Identifying the most promising indicators to measure impact of knowledge on policies;
- Gathering suggestions on what the JRC and the community should do to take this discussion forward.
1. Definition of knowledge impact on policies
The participants expressed several different opinions as to what constitutes the impact of knowledge on policies and how to frame it. Research can influence policy in terms of attitudinal change, procedural change, policy content and behavioural change. They also recognised that there can be substantial differences between influence and impact, and that in some cases influence on the thought process of policymakers is imperceptible and thus probably unmeasurable. Participants cautioned that the impact of a knowledge organisation/entity operating within government and of one outside it could be quite different. Every knowledge organisation is sui generis and would therefore require a different, context-specific definition. Nevertheless, the common consensus was that:
The impact of knowledge on policies is an observable change, at any level of the policy world and at any stage of the policy cycle, which is a result of the interaction between knowledge and policy.
2. Challenges for assessing and measuring impact of knowledge on policies
Research organisations are increasingly required to demonstrate the beneficial outcomes or wider effects of their work on the economy, society and policy making. At the same time, policy makers are encouraged to pursue evidence-informed policy. Much of the discussion about policy impact involves understanding how these two different systems, ‘research’ and ‘policy making’, and their associated organisations, can meaningfully be aligned in order to produce mutually beneficial outcomes. This is about thinking of ways in which systems with a range of overlapping objectives can work together in a given context in order to produce better processes and outcomes for both. However, there are important considerations, challenges and complexities which emerge when trying to understand what this means in practical terms.
The major challenges and issues associated with assessing and measuring impact of knowledge on policies are:
- Researchers and policy makers inhabit different worlds, and it can be difficult for both groups to understand and navigate each other's motives and ways of operating.
- Much of current policy is not based on robust empirical evidence. At the same time, a great deal of (funded) research is not set up in order to help address policy questions or does not produce data that could be shared easily with other data sets for such purposes.
- It is necessary to recognise that, when a decision is taken, scientific evidence is weighed alongside political interests and many other factors. Biases also play a substantial role in the behaviour of both scientists and politicians. Changing practices on both the policy and the scientific sides would require greater transparency.
- Policy makers might seek to consolidate data from a variety of disciplines in order to address a policy question, whilst academic research and dissemination is mostly organised around strict disciplinary boundaries.
- Policy cycles and research cycles are rarely aligned. Policy makers often require immediate answers to complex questions; research inquiry means spending a longer time addressing specific, focused questions in depth.
- Whilst it is accepted that policy makers deal with 'contested' issues, most research likewise is not about providing 'facts' but about increasing overall understanding of contested issues.
- Achieving meaningful objectivity in the process of measuring impact will be quite difficult, since most of the information on the level and intensity of interactions with policymakers will be provided by the knowledge organisation/broker/scientist. Misrepresentation of impact could be a real challenge.
- There is a need to map the different potential pathways of policy impact, and there is a potential danger of taking a reductionist approach or of measuring the wrong indicators altogether.
- There are many sources of information and different data-gathering techniques, and the level of granularity of information can vary considerably. This could pose substantial challenges when measuring and comparing impact on policies.
- There is an almost complete mismatch between information sources for researchers and policy-makers. While researchers need to cite their sources, policy-makers have multiple channels of information and sources are almost never attributed. There was a difference of opinion about whether it is desirable or achievable for researchers to try to persuade policymakers to cite their sources in a way equivalent to research.
- Attributing a policy change to the work of a single knowledge organisation, scientist or research report would be challenging if not impossible. The potential benefits of such precise attribution may in some cases be outweighed by the effort it requires. Considering the effect of the whole knowledge ecosystem on a particular policy field could be a more reasonable and appropriate response in such cases. In other cases, the drive to discover and evaluate the impact of a knowledge organisation can be an important tool for self-reflection, learning and improving the organisation's operations.
- The sheer diversity of possible pathways by which knowledge can affect policies, and the wealth of different actors (types of organisations, entities and individuals on the research side as well as types of bodies and decision makers on the policy side), make measuring impact extremely complex.
Fig. Diverse impact pathways (illustration of 3700 unique pathways in 6679 REF impact case studies)
3. Indicators for measuring knowledge impact on policies
The participants agreed that there is no one-size-fits-all solution and that developing a basket of contextual and complementary indicators, both qualitative and quantitative, is the best approach to measuring impact.
Existing methodologies and promising new approaches:
- Qualitative assessment via structured descriptive case studies or written narrative accounts
- Policymaker surveys or expert panel evaluations (these require a common template for case studies and common methodologies for processing cases)
- Citation rates / tracking knowledge in policy documents (counting and analysing mentions in policy documents of scientific and technical reports, policy briefs and position papers). The potential reductionist effect of chasing the paper trail is to be taken into account when using this approach.
- Quantity and quality of public and political debate. Mentions in parliamentary debates and in the media.
- Indicators for engagement (assessing the depth and frequency of engagement with policymakers). Example: number and level of decision making meetings the knowledge provider/scientist is invited to
- "Repeat business", or how often the policymaker returns to a source of evidence
- Oral history – testimonials/network of influencers
- Membership of advisory groups – with both formal and informal invitations to contribute to the decision-making process. However, it needs to be noted that sitting on advisory committees is no guarantee of impact.
- Submission of opinions and reports at public/stakeholder/expert consultations as part of the official policymaking process (likewise no guarantee of impact)
- Network and process indicators – not only the results should matter (quotes in policy documents, the paper trail), but also the process of coming to a policy decision (workshops, expert groups) and the network behind it (analysing research communities)
- Recognised architecture for grey literature (developments are taking place with bibliometric sources, which are putting policy documents and other grey literature into databases in order to promote quantitative impact assessment)
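The citation-tracking indicator above (counting mentions of scientific and technical reports in policy documents) can be illustrated with a minimal sketch. This is a hypothetical illustration, not an established methodology from the workshop: it simply counts case-insensitive mentions of report titles in the plain text of policy documents, producing exactly the kind of raw count that the "paper trail" caveat warns against over-interpreting. The function name, documents and titles are all made up for the example.

```python
import re
from collections import Counter

def count_mentions(policy_texts, report_titles):
    """Count case-insensitive mentions of each report title
    across a collection of plain-text policy documents."""
    counts = Counter()
    for text in policy_texts:
        for title in report_titles:
            # Escape the title so any punctuation in it is matched literally.
            pattern = re.escape(title)
            counts[title] += len(re.findall(pattern, text, flags=re.IGNORECASE))
    return counts

# Usage with invented documents and titles:
docs = [
    "The impact assessment draws on the Energy Outlook 2016 report.",
    "See Energy Outlook 2016 and the urban mobility brief.",
]
titles = ["Energy Outlook 2016", "Urban Mobility Brief"]
print(count_mentions(docs, titles))
```

A real implementation would also need to handle paraphrased or uncited references, which – as noted above – are the norm rather than the exception in policy documents, and which simple string matching cannot capture.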
Participants cautioned against drawbacks and limitations:
- There is no single indicator; no one size fits all.
- There is no unified system for publishing policy reports – an equivalent of the 'DOI' for policy documents has yet to be established.
- There is no adequate comprehensive repository for policy documents, which would allow for a range of metrics to be subsequently developed and utilised.
- Unintended consequences such as ‘gaming’ or encouraging bias towards easily demonstrated ‘impactful’ research have to be avoided.
- A system for measuring impact that itself reduces the overall quality of research output must be avoided.
- Tension remains within any system for capturing and recording impact that rewards clear lines of demonstrable impact while disregarding the messy realities of policymaking processes.
The participants also described the desirable characteristics of the indicators:
- Highly contextual and dependent on objectives
- User-friendly, traceable and complementary
- Multidimensional, appropriate for a particular level
- Informative, allowing learning and reflection in order to improve evidence-informed policymaking
- Indicating reach and significance
4. Next steps: how to continue the work of the community and what role for the JRC
To safeguard the role of research and citizens' continued support for it, funding for science that is of no use to society should be limited. Future research funding programmes need a built-in system and structure supporting governments and regions in their efforts to collect the intellectual input necessary for solving challenges. It is imperative to produce knowledge that is more impactful and more useful to policies and society.
The group identified the necessary next steps to take this discussion further:
- There is a definite need to gather more insights into knowledge impact on policies. The difference between informing and influencing policy has to be taken into account.
- Clarify what could be the contribution of influencers, intermediaries or knowledge brokers (better linking supply and demand)
- Develop guidelines on what these indicators of knowledge impact on policies should look like, including advice on how organisations should adjust them for their specific circumstances. Most indicators are contextual and partial, and could thus be misleading; understanding their context, limitations and applicability is therefore essential. Peer reviews and oral qualitative impact statements from policymakers are promising approaches and need more research. The process of interaction is equally important.
- Consider how the impact could be visualised and assessed.
- Further the collective understanding and theory of drivers behind impact
- Bring knowledge and expertise from different Member States and communities (social sciences, natural sciences, economics)
- Facilitate development of a database of grey literature – structure/standards + policies (distributed/interoperable)
The participants also put forward a number of suggestions on how the JRC can facilitate these actions, particularly by playing a proactive knowledge brokerage role, coordinating the community of practitioners and enabling the development of the theory, models and processes for measuring the impact of knowledge on policies.