Predictions and Forecasts in Policy Making

Pressing Issues: The History of Technology meets Public Policy

7-9 October 2013 (Colby College)

Ann Johnson

Department of History

University of South Carolina

In the United States the National Nanotechnology Initiative (hereafter, NNI) was launched with a 20-year vision. The ‘first strategic plan’ from 2000 to 2005 would focus on the development of passive nanostructures; after 2005 more resources would be devoted to active nanostructures; by 2010 S&T policy would be oriented toward active nanosystems; and the end stage would be the development of molecular-scale manufactured or self-assembled nanosystems (Roco 2007:8; Roco 2006). These generational transitions are laid out in Mike Roco’s heavily used slide on the subject.

The creation of these developmental benchmarks as a part of the statement of S&T policy was certainly not new or unique, but it seems to have set a precedent in nano, where such benchmarking has become a norm in organizing the development of scientific and technological research. Documents have since proliferated that suggest timetables and routes for the development of nanotechnologies of all stripes—from Roco’s stump-speeches about the on-going progress of the NNI to the National Institutes of Health’s Roadmap for Nanomedicine to the Chemical Industry’s Vision 2020 Roadmap for Nanostructured Materials. These types of documents have also become common in other areas of S&T policy, and their increasing domination of nano policy seems to be part of a broader trend in S&T policy (Schaller and Kostoff, 2001; Kostoff, 1997; Fleischer, Decker and Fiedeler, 2004).

These documents appear generally under two different guises: as foresighting documents, largely in a European context, and as roadmaps, which can be produced by either the private sector or by government agencies.[1] In this paper the focus will be on roadmaps, but this should not be taken to imply an argument for a strong distinction between the two types of policies. In fact, the U.S. chemical industry’s roadmap for nanostructured materials, the case study I will discuss later in this paper, has many elements of a foresighting document. Thus, it is worth noting that there is a tremendous diversity of purposes and formats for roadmaps. On the other hand, there are elements common to all roadmaps. Ronald Kostoff and Robert Schaller define a roadmap as a “layout of paths or routes that exist (or could exist) in some particular geographic space” (Kostoff & Schaller 2001:132). They emphasize that roadmaps are social mechanisms as well as learning experiences and communication tools. As a result roadmaps are not only outcomes or documents, but also processes, and perhaps their role as processes is more important than the actual paths they end up charting for technological development. Robert Galvin, who in 1998 wrote an editorial for Science on the need for roadmaps in scientific research, claimed that “Roadmaps communicate visions, attract resources from business and government, stimulate investigations, and monitor progress. They become an inventory of possibilities for a particular field, thus stimulating earlier more targeted investigations” (Galvin 1998:803). Others emphasize the role they play as “Decision aids for strategic building and planning” (Fleischer, Decker & Fiedeler 2004:128; Garcia & Bray, 1998).
Following the innovation theory literature, Kostoff and Schaller also distinguish between “technology-push” roadmaps, which start with existing technologies and chart the diversity of capabilities to which new research could lead, and “requirement-pull” roadmaps, which start with desired products and work backwards in time to pull in the R&D necessary to arrive at those products (Kostoff & Schaller 2001: 136). Roadmaps also come in prospective and retrospective varieties, though I will talk exclusively here about prospective roadmaps.[2]

A literature about the processes of developing roadmaps and their effects is just beginning to develop, and little of it deals specifically with the case of nanotechnology (Schaller 2004; Kappel 1998). However, one thing, exemplified by the NNI benchmarks, is clear: the predictive dimension of these documents is problematic. Roadmaps have many useful effects on technology development, but their accuracy in predicting, or perhaps even in charting, the course of scientific research and technological development is spotty. Thus we must understand the roadmap in terms other than as a straightforward strategic planning document, especially when it is applied to emerging technologies, where no clear path of continued innovation yet exists. In this sense, the confidence that some policymakers seem to invest in roadmaps is likely misplaced, drawn from the successes of particular roadmaps whose technological development bears little resemblance to the umbrella category of nanotechnology. The most obvious example of an attractive forerunner of the nano-roadmap is the International Technology Roadmap for Semiconductors (hereafter, ITRS), which in a particular sense does deal with nanoscale technologies. However, the ITRS was created with the goal of maintaining the pace of miniaturization characterized as “Moore’s Law.” This is a different kind of project than planning the research benchmarks needed to bring something as unpredictable as molecular nanosystems into existence. As a result, in examining the work roadmaps do in technological development it is useful to keep in mind the fundamental differences between innovating in an already existing technology, like semiconductors, and shepherding a new technology into being.
In the latter case the levels of uncertainty are obviously much higher, and strategic planning is more likely to go astray in ways that decouple the incentives roadmaps create to shape the route of technological development from any logic of stepwise expansion and extension of technological knowledge. As a result, one of the real obstacles in roadmapping is the increasing chance that the paths of actual and planned technological development will diverge, rendering either the roadmap inaccurate and pointless or the technological development distorted. This is particularly true for long-term roadmaps, where corrections are clearly necessary every few years, but most of which fail to establish a process for revisiting the map and revising the schema.

I. The High Modernist Ideology of S&T Roadmaps

So, if roadmaps are questionable as efforts to divine the future, then they must play other roles in technological development and have other qualities attractive to policymakers. My argument here is that roadmaps are an extension of a modernist, rationalist streak in policymaking more generally. As such they draw on several elements of James Scott’s understanding of what the state wants from its planning policies (Scott, 1999). I think it is useful to review Scott’s understanding of “High Modernist Schemes” as it applies to the visions espoused in many different kinds of roadmaps, whether such roadmaps are actually instruments of the state or of corporate conglomerates, like the ITRS or the Chemical Industry’s Nanomaterials by Design Vision 2020 Roadmap. Scott claims that High Modernist ideologies are based on the deep assumption that progress can and will improve the world, and that shepherding such progress into the world requires management from the top to bring people, institutions and even nature into the state’s fold. This ideology underlies much state planning in both democratic and authoritarian states, but according to Scott catastrophic results come from the coupling of high modernist ideology with authoritarian states, which squeeze out the spaces for dialogue and resistance. In the case of nanotechnology policy in the largely Western developed world (but including Japan and South Korea), the abuse of high modernist ideology by authoritarian states is not a relevant critique. However, there are several common elements in Scott’s analysis and in roadmapping, and I believe these dimensions show some of the potential shortcomings of roadmaps.

James Scott points out a number of qualities of policies that High Modernist ideologies privilege. These include centrally imposed, top-down linearity and control; the devaluing of mētis, or practical, local knowledge; and the simplification of real problems to render systems legible to the state. All three of these conditions are relevant to the planning of science and technology. Roadmaps, in many cases, do try to impose linearity (and causal chains) on the production of knowledge. While a number of scholars have argued that science has begun to take on a new ethos, wherein applications drive, shape and determine priorities in more basic research, rather than the reverse, it is noteworthy that this new kind of science is still explained in a basically linear model.[3] Yet any legitimate, contextualized, historical study of the development of almost any scientific knowledge shows its unpredictable and non-linear path. Thus the state and scientists alike choose to impose linearity on their understanding of scientific development.

The second quality of the High Modernist Ideology is not so cooperatively accepted by practitioners and the state. Scott argues that in order to make various processes manageable, the state imposes types of knowledge that lend themselves to legibility, linearity, and control by the state. This means that the state prefers formal, universal knowledge forms (Scott calls this form episteme) over local, tacit, and experiential knowledge (Scott’s mētis), even when the latter is shown to be highly effective. Simply put, states, acting through policymakers, prefer modular, transferable, technically rigorous knowledge. However, science studies scholarship over the past two generations has shown the critical role of what Scott calls mētis in the production of knowledge and new technologies. Mētis is a knowledge type that can be tacit, physically embodied, or simply non-explicit. It is experiential and usually locally produced and used. Roadmaps, like the state, privilege episteme and are usually explicit about how it should be developed (and about what particular new knowledge is critical to produce). This creates problems for roadmaps, since it shows a disconnect between scientific knowledge as the roadmap models it and scientific knowledge and know-how as research into actual science shows it to be. This can cause all sorts of distortions in the type of knowledge production that S&T roadmaps incentivize—meaning that if roadmaps attract resources, as Galvin claims, then those resources may be directed at generating the knowledge that the roadmap privileges and not the kinds of knowledge that are genuinely needed to advance science and technology. Here there is a conflict between the kinds of knowledge generation scientists believe should be prioritized and what the roadmaps highlight as critical. One example of this has been the meager emphasis on metrology in nano since the NNI.
NIST receives only a small percentage of the over $1 billion allocated to nano by the US federal government, yet many scientists believe that progress on their projects depends on this under-funded production of better knowledge and instruments for measuring entities and phenomena at the nanoscale. But even though metrology could be seen as episteme, since it leads to standardization, in the case of nano development it is instrumental (excuse the pun) rather than the aim of roadmapping per se, so it falls back into the category of scientific practice (which obviously has both tacit and explicit dimensions), which roadmaps obscure more than they explain.

The third quality of Scott’s critique of state policies that articulates with nano policy is the state’s preference, perhaps even demand, for legibility at the top levels. This is a phenomenon clearly driving S&T policy, with its renewed emphasis on metrics and assessment plans.[4] Science which can be rendered legible to the state is easier to measure, and its outcomes are easier to assess. However, legibility has a cost, and that price is paid in the simplification of the science in order to render it legible. Complexity obscures legibility, and therefore policymakers and roadmappers end up simplifying the process of making scientific and technological knowledge to the point of losing the fit between model and reality.

Taken together, the qualities of Scott’s High Modernist ideology provide a glimpse of the possible problems and shortcomings of roadmapping as S&T policy. All these qualities are interwoven in Scott’s analysis—clearly linearity and legibility are mutually reinforcing characteristics, and the devaluing of mētis reinforces both as well. At the core of a Scottist analysis of roadmapping in nano is a critique of the centralized, top-down mentality that roadmaps, as well as foresighting and forecasting documents and processes, advance. Like any social activity, science is rarely linear; it is illegible at many levels and absolutely dependent on local, informal know-how. Modeling science as linear, legible and based on episteme fails to do justice to a true description of the production of science and technology. Thus, in addition to discarding the notion of roadmaps as accurate predictors of the future, we must also remain skeptical of their strategic planning capacities, since they rest on a mismatch between science as policymakers would like it to be and science as it really is.

Thus the question remains: given the current popularity of roadmaps as a form of S&T policy, what work do roadmaps actually do in the development of science and technology? To answer this, two efforts are needed. The first should examine the history of roadmapping and its precursors in technological forecasting to understand how we arrived at a point where roadmaps seem to wield so much power. A second research program needs to look at actual roadmaps and examine their operational attributes. These two efforts will help answer the “why roadmaps?” question, since the obvious answers (i.e., that roadmaps provide accurate prediction or that they facilitate successful strategic planning) are more problematic than they initially appear.

II. Toward a History of Roadmapping: Connections to Technological Forecasting

Despite some recent claims that foresighting and roadmapping are radically new efforts to manage the direction of innovation through targeted scientific research, there is clearly over a century of history of the state’s direct intervention in science, technology and industry.[5] Some would also tie the development of the modern nation-state itself to its increasing capacity to generate and use knowledge—a process which clearly dates back to the 17th century, if not earlier.[6] Still, continuity is not the same thing as a static relationship, and there seem to have been a number of paradigms of the state’s involvement in the planning and prioritization of scientific research. It is not the purpose of this paper to trace fully the history of state involvement in scientific planning—that would fill several volumes. But certain modes can be quickly identified.

Both World Wars generated ‘command economy’ models of state involvement in scientific research, in different forms in several participating nations. During World War I, Britain’s chemical industry collapsed after the expulsion of German firms, and the Department of Scientific and Industrial Research (DSIR) was created to plan and support the reconstruction of a native British chemical industry (as well as industrial chemistry and chemical engineering as academic disciplines producing the scientists and engineers needed for the industrial effort). In addition, the DSIR’s mission included supporting other strategically crucial research in an effort to prevent further technological and industrial inadequacies. The most obvious example of the state’s support and directional control of scientific research during either WWI or WWII was the Manhattan Project, though the value and lessons of this particular example are much debated. Clearly a scientific success—after all, the project did produce its desired technology through the application of targeted physics, chemistry and engineering research—the Manhattan Project is much more complicated to assess as a planning project. Its original financial allocation was wildly inadequate ($100 million vs. an actual cost of $2 billion), and it took longer than planned.[7] Still, after the war it stood as a shining example of the applicability of science to technological development and lay behind the logic of the so-called linear model of Vannevar Bush’s Science, the Endless Frontier and his fight to establish the U.S. National Science Foundation (Hart 1998).

During the Cold War both Eastern and Western states strove to direct scientific and technological research in ways that would maintain or extend the advantages states felt they had, or needed, over their rivals (Rocca 1981). Technological forecasting, as a policy tool for the management of research deemed essential to strategic needs, emerged in the 1960s as an outgrowth of non-structural economic modeling and operations research. Several factors played causal roles in the emergence and positive reception of technological forecasting. Not only did it fit with the Bush linear model of science in its heyday, whereby investment in key basic sciences led to socially, economically, and—not least—militarily important technologies, but technological forecasting also fit with a kind of social engineering ethos common in both Western and Eastern policy circles. This was the dominant period for Scott’s High Modernist Ideologies, and a number of the social and technological planning disasters Scott details occurred during this time. But technological forecasting had multiple dimensions and several different styles.