Nano-industry operationalisations of ‘responsibility’: Charting diversity in the enactment of responsibility

Abstract

The Collingridge dilemma – the problem of reacting to emerging technology either ‘too early’ or ‘too late’ – is one that is readily recognised by developers and promoters of nanotechnologies. One response can be found in the rise of a discourse of ‘responsible development’ in the science and innovation policy landscape. While a number of commentators have discussed the potential of such initiatives, it remains unclear how responsible development is actually being configured ‘on the ground’, in private sector nanotechnology. This paper addresses this question by analysing empirical engagements in Europe and the US in order to map industry operationalisations of ‘responsibility’ in these contexts. We show that a number of different articulations of ‘responsibility’ are present, including as a response to public lack of trust and perceived public pressure, and as the management of risk. We close by relating these findings to the theoretical literature on engagement, other contemporary accounts of the ways in which responsible development can be operationalised, and the possibilities that these articulations of responsibility may open up.

1. The rise of ‘responsibility’ in innovation and governance

The ‘Collingridge dilemma’ is well-known within innovation and technology studies. Named for the scholar who first outlined its central premise – that there is always a tension between ‘too early’ and ‘too late’ intervention into technological pathways – it has become a semi-paradigmatic summary of the difficulties of managing science, technology and innovation policy. Drawing together what are now familiar concerns with social implications, technological ‘lock-in’, and the nature of decision-making on technical issues, Collingridge writes that:

The social consequences of a technology cannot be predicted early in the life of the technology. By the time undesirable consequences are discovered, however, the technology is often so much part of the whole economic and social fabric that its control is extremely difficult. This is the dilemma of control. When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming. (Collingridge, 1980, p. 11; emphasis in original)

For Collingridge, then – and the many scholars who have since drawn on his analysis (see, for instance, Fisher, Mahajan, & Mitcham, 2006; Liebert & Schmidt, 2010; Nordmann, 2010) – maintaining some degree of control over emerging technological developments poses a central, very practical challenge to the policy process. How to ensure ‘social consequences’ which are, at least, acceptable to the publics affected by them (a concern which has been a key feature of recent debates around the ‘democratisation’ of science, cf. Jasanoff, 2003) whilst dealing with profound uncertainties about what those consequences could be?

While the terms of this dilemma have been subject to some debate (Liebert & Schmidt, 2010; Nordmann, 2010), it is fair to say that it continues to vex those who study, and practice, science policy. Recent interest in ‘upstream engagement’ can, for instance, be understood as one response to the balancing act between the temporalities of societal values and scientific knowledge (Wilsdon & Willis, 2004). Liebert and Schmidt (2010), in their analysis of the Collingridge dilemma’s implications for technology assessment (TA), emphasise Collingridge’s own solution to the problem: designing controllability and flexibility into technologies such that they can be monitored and steered in a way that, in effect, means that the dilemma never becomes applicable. Doing this, Liebert and Schmidt write, means that “controlling is feasible because we should develop technologies that can be controlled” (p. 63; the tautology is deliberate in that they describe this as a “normative [or, we might say, virtuous] circle”). The rise of governance approaches to (science) policy can also be seen as a response to the problem of when to act within political systems (Hajer & Wagenaar, 2003). Here, distributed methods of guidance and control “substitute for the systematically discredited top-down command and control form of authority that had been previously exercised by centralized governments” (Shamir, 2008, p. 6); in theory, at least, governance mechanisms for the steering of technoscience have a degree of flexibility built into them that enables agile political responses (such as rapid soft law responses or informal moratoria) to previously unforeseen consequences.

These dynamics are nowhere more evident than in recent discussions of the development and governance of nanotechnology. This emergent technology – a ‘solution in search of a problem’, to paraphrase Lindquist, Mosher-Howe, and Liu (2010) – has garnered significant attention from both natural and social scientists and science policy makers in recent years; and – with its attendant promises of profound social impacts tied to profound scientific and technical uncertainties (Royal Society and Royal Academy of Engineering, 2004) – can in many ways be seen as an exemplary case for the Collingridge dilemma to come into play (Lee & Jose, 2008). This has been widely recognised both by developers and promoters of nanotechnologies and by regulators and policymakers. Developers are faced with the challenge of harnessing the enormous potential of nanotechnologies while also being attuned to potential environmental, health and safety issues (Krupp & Holliday, 2005). Policymakers and regulators are called on to regulate such that the innovative potential of the field is not restricted, whilst being sensitive to public concerns and possible emerging risks to human health and the environment (European Commission, 2004). The challenges involved in researching and developing nanotechnology in the teeth of these tensions are considerable: currently, for instance, there is a dearth of knowledge about possible eco-toxicological effects of nanomaterials (Handy et al., 2008; Thomas et al., 2009; Choi et al., 2009), industry faces commercial uncertainties regarding the development trajectories of nanotechnology in different sectors (Sutcliffe, 2008; Davies, 2009), and there is a lack of knowledge about public acceptance of nanotechnology (Kearnes & Rip, 2009; Rip, 2006).

It has rapidly become clear that traditional regulatory approaches are unable to deal with this complex intersection of uncertainties, especially when twinned with fast moving technological development (Lee & Jose, 2008; Kurath, 2009). Rather, the last decade has seen the rise of a set of governance arrangements which might be understood as speaking to the Collingridge dilemma through their incorporation of relatively flexible, reactive structures to guide the development of nanotechnology. (In this, of course, they can be understood as part of the wider governance turn and its “shift from hierarchical to more cooperative forms of regulation” (Kurath, 2010, p. 88).) This ‘emerging governance landscape’, as charted by Kearnes and Rip (2009), comprises a wide range of heterogeneous soft law – as opposed to hard regulatory – approaches, incorporating regulatory reviews, ELSA (ethical, legal and social aspects) research, public deliberation, and voluntary codes of conduct. These are, Kearnes and Rip suggest, ultimately coordinated through a widespread notion of ‘responsibility’. This discourse of ‘responsible development’ has become a prominent feature of science policy programmes and, indeed, of corporate discussion of nanotechnology. Lee and Jose (2008) similarly view the rise of ‘responsibility’ as a response to regulatory paralysis in the face of complex technological change and as a necessary, and long-term, structuring device for corporate behaviour, noting that “it is obvious that these demands for responsible behaviour will not apply merely for an interim period, pending more formal regulation” (Lee & Jose, 2008, p. 117).

This emphasis on responsibility is by no means unique to nanotechnology – Shamir (2008), for instance, has identified a much wider move towards ‘responsibilization’ within neoliberal economies. ‘Responsibility’, however, is a term that is difficult to get a firm handle on, as the values that responsibility embodies vary across situation, time and place. Two basic features of responsibility emerge from philosophy and the social sciences: these are imputability (referring to the possibility of attributing or ascribing an action to a person) and accountability (being responsible in terms of being liable to be held to account) (Pellizzoni, 2004; Bovens, 1998). These concepts are used to analyse individual responsibility; frequently, the concepts are used in the sense of political, moral or legal liability. A number of authors, however, have noted the shortcomings of contemporary concepts of responsibility with respect to the future (van de Poel, 2012; Richardson, 1999; Adam & Groves, 2011) and, specifically, to the introduction of modern technology. In the first instance, most of the philosophical literature tends to focus on backward-looking responsibility – for something that has occurred in the past – and often understands this in terms of reactive attitudes (van de Poel et al., 2012). Richardson (1999) discusses forward-looking responsibility with regard to the elements required to move the notion of “taking responsibility” beyond any simple notion of the duties we take on; these elements include the ability to cope with surprises when taking responsibility and the use of discretion (the pragmatic revision of rules) in taking responsibility.
Adam and Groves (2011) observe that a backward-looking focus means that no account is taken of the need for understanding what might constitute action that takes responsibility before the fact; this is an issue in technological societies in which the connection between actions that are currently legitimate in the eyes of the law and future harms is problematic. They offer an alternative understanding of responsibility – care – which brings the future to the forefront of our concern; care makes us aware of how:

futures interweave and alerts us to the need to handle responsibly a world that is spun from myriad relationships and commitments, attending all the time to what the things we care about need from us in order to continue to flourish (Adam & Groves, 2011, p. 25)

This moral perspective differs from that which existed in ancient Greece and the European societies of the 17th and 18th centuries (Jonas, 1984), in which “action on nonhuman things did not constitute a sphere of authentic ethical significance” (p.4). “Ethical significance” was visible in the direct dealing of man with man; moreover, the knowledge required to assess the morality of an action was of a kind readily available to everyone. This understanding of ethical significance has inevitably changed with the introduction of modern technology, which “has opened up a whole new dimension of ethical relevance for which there is no precedent in the standards and canons of traditional ethics” (p.1).

Work on the ‘risk society’ – the “barbaric outer edge of modern technology” (Giddens, 1999, p. 2) – similarly connects risk with responsibility. “Organized irresponsibility” is a key feature of Beck’s (1995) diagnosis of the risk society: late modern society allows scientists, engineers and industry to develop and introduce a variety of new technologies, while at the same time lacking the means to hold anyone accountable. Accountability is in any case not straightforward, given that technological innovation involves the work of many actors and its outcomes depend on their interactions (von Schomberg, 2007). Thus one can talk of a collective responsibility at odds with the concepts of imputability and accountability employed to talk about individual responsibility. Pellizzoni (2004) has discussed collective responsibility with respect to environmental governance. He singles out four dimensions of responsibility – care, liability, accountability and responsiveness – as a means of offering insight into the major changes that have occurred in the move from ‘governing’ to governance. The fourth dimension of responsibility, responsiveness, represents, he writes, “an encompassing yet substantially neglected dimension of responsibility” (Pellizzoni, 2004, p. 557). Responsiveness refers to:

a situation where there is neither presumption of sufficient knowledge and control nor reliance on ex-post accounts and adjustment of self-established courses of action, but rather a receptive attitude to external inputs to help in deciding what to do (Pellizzoni, 2004, p. 557)

In the theoretical literature, then, there has been substantial debate around what ‘responsibility’ is and how it can be constituted. However, much of the policy and academic debate around the responsible development of nanotechnology has emerged with little reference to this. Thus the EC Code of Conduct for Responsible Nanosciences and Nanotechnologies Research notes only that a “culture of responsibility should be created in view of challenges and opportunities that may be raised in the future and that we cannot at present foresee” (European Commission, 2008, p. 7). The industry-led Responsible Nano Code similarly seeks to “provide guidance on what organisations can do to demonstrate responsible governance of this dynamic area of technology” (Responsible Nano Code, 2008, p. 3). Reflecting on the dominance of ‘responsibility’ in the governance of nanotechnology, Kearnes and Rip (2009) argue that the discourse of responsible development “operates as a meta-framing of the governance of nanotechnology, emphasising mechanisms that enable the ‘benefits’ of nanotechnology to be realised whilst innovating strategies for avoiding possible negative consequences” (p. 114). This particular framing of ‘responsibility’ thus stresses the continued development of nanotechnology in the face of multiple uncertainties around it (cf. Kjølberg, 2010).

Despite such work mapping, to an ever higher degree of resolution, nanotechnology’s ‘governance landscape’ (Kearnes & Rip, 2009), it remains unclear how such soft law initiatives are being translated into practice. While Kurath (2009) has argued that both informal regulatory activities and public deliberation processes are failing to live up to promises of ‘social robustness’, and Kjølberg and Strand (2011) show that academic scientists, at least, draw upon a number of very different models of responsibility when asked to consider the ‘responsible development’ of their work, the question remains of how industry actors – the vanguard of soft law – are responding to, translating, and managing the call to responsibility. It is this question that we treat in this paper, asking how responsible development is being configured ‘on the ground’, in the discourse and practices of commercial sectors oriented towards nanoscience and nanotechnology. This is particularly pressing given that notions of responsibility, safety and indeed social robustness are currently ambiguous, in the sense that there is no fixed consensus as to the meaning of the terms or on how they should be applied in real-world situations (Lindquist et al., 2010). There is even less agreement about how these terms might provide useful guides in debates about the appropriate governance of nanotechnology (Davies, Macnaghten, & Kearnes, 2009).

Here, then, we start to explore how discourses of responsible development are taken up in industry, at the level of individual actors rather than the Codes and policy statements tracked by, for instance, Bowman and Hodge (2008) or Kurath (2009). In so doing, we speak to debates in science policy around the effectiveness of soft law and, ultimately, how to deal with the Collingridge dilemma: we examine the ways in which the coordinating framework of ‘responsibility’ is operationalised, asking whether it does indeed have the potential to steer nanotechnology in a way sensitive to societal values. We do this in two contexts, mobilising interview, ethnographic and documentary data gathered in Europe and the United States in order to map some of the ways in which ‘responsibility’ is rendered meaningful in nanotechnology-oriented industry. We draw on the conceptualisations of ‘responsibility’ outlined above as a means of framing the empirical data presented. In what follows, we make a number of points. We argue that our data offers a snapshot into at least two ways of operationalising ‘responsibility’. In our European data, the concept finds expression in a responsive approach in which a response to public lack of trust and perceived public pressures are key features. In the US data, liability and accountability dimensions are central, often in the shape of risk management (through reference to potential environmental and health implications, for instance). While in both case studies approaches to ‘responsibility’ are linked to profit-making and business survival, we show that articulations of responsibility can differ significantly. We close by returning to the overall policy landscape of nanotechnology, drawing out the implications of these ‘on the ground’ configurations of responsibility for the broader international drive towards responsible development as a form of soft law.

Before moving on to the main body of our argument, however, we outline the research engagements we are drawing upon.

2. Methods and data

We draw on two studies, each of which sought to engage with industry actors in nanotechnology in order to explore their perspectives on responsible development. The first study was conducted as part of an EU-funded project (Deepening Ethical Engagement and Participation with Emerging Nanotechnologies, or DEEPEN; see Davies et al., 2009). Dedicated interviews, documentary analysis, and participant observation in meetings were employed in order to explore how the ‘responsible development’ of nanotechnologies is articulated by industrial actors across three key sectors in nanotechnology: nanoelectronics, materials and surfaces, and bionanotechnology. In this paper, we primarily make use of a survey – conducted by means of semi-structured interviews – carried out with companies across European nanotechnology industry between January 2008 and January 2009 in order to understand key differences between sectors with regard to how they operationalise ‘responsible development’. Of the 11 companies sampled, 7 were based in the Netherlands and Germany, 3 in Ireland, and 1 in the UK. Differences between sectors will not be highlighted here (for such a description see Shelley-Egan, 2011); rather, we use this dataset to develop a more general overview of industry articulations of responsible development.

The second study was carried out at the Center for Nanotechnology in Society at Arizona State University (CNS-ASU), and focuses on US-based nanotechnology industry. This project – which was action-oriented in that it sought to build the Center’s private sector contacts and to coordinate its outreach to and engagement with them – was, for the period of 2010-11, a central part of the activities of SRD. For CNS-ASU, as a research center, this private sector ‘outreach’ involved seeking to understand where points of contact and synergies with the private sector (interpreted broadly as including nano industry, business, not-for-profit research, and NGOs) lay, and developing these through informal conversations, community-building activities, and more sustained partnerships. In this analysis, then, we are drawing upon what became a broad programme of ethnographic engagement with the nanotechnology private sector, using CNS-ASU private sector outreach as a vehicle. Specifically, we will make use of a series of interviews carried out with private sector actors around their understanding of responsibility; these more focused explorations are combined with participant observation of workshops on soft law, activities and events oriented around the status of the governance of nanotechnology, and local nano industry groups and meetings, to build a picture of the ways in which responsible development is being imagined within US-based private sector communities.