Draft version – please do not circulate!

Situated knowledge, international impact: Changing publishing practices in a German automotive engineering department.

Wolfgang Kaltenbrunner

Munich Center for Technology in Society (MCTS)

Technical University of Munich

Abstract

In this paper, I analyze how the institutional requirement to increase the number of international journal publications affects research practices at an automotive engineering department in Germany. Automotive engineering is a field with traditionally rather scarce publication activity and strong connections to industry. Substantial authority to decide what should be considered appropriate technological solutions was therefore reserved for local academic elites as well as industrial partners. Recent reforms in publishing requirements change this situation. They can be seen to level the discretion of existing elites by redistributing some epistemic authority to international journal referees. However, the judgments of referees are often incongruent with the way research is organized at the level of the department. For example, the thematic interests of journals can exert a centrifugal pull in the context of an otherwise highly modular research culture, and the possibility to publish in international venues is unequally distributed across individual research projects. But while department members agree that there is a lack of fit between current practices and new expectations towards their publishing behavior, their opinions about the conclusions that should be drawn differ significantly. Some researchers argue that profound organizational changes are necessary to foster the academic rigor of engineering research. Others believe that evaluation criteria should simply be adapted. This situation is arguably characteristic of traditionally rather localized research areas, and it implies the need for consultative processes that allow for explicit discussion among researchers and other actors about criteria of “good research” in fields such as engineering.

Introduction

A growing body of literature has investigated how changes in the organizational and administrative underpinnings of academic work over the last decades affect the daily practices of researchers across fields. This question is highly relevant, because many of the reforms induced by policy and institutions entail significant changes in the organization of knowledge production, with often unforeseen effects. Analyses of changing incentive structures in scientific work have shown particularly interesting effects at the level of day-to-day research practices. For example, research evaluation systems based on productivity in academic publishing have been found to induce phenomena such as “salami-slicing” (Choi et al. 2014), academic transfer markets (Stern et al. 2016), excessive stratification of university systems (Hamann 2016), goal displacement (Butler 2003; Colwell et al. 2012), as well as a general neglect of functions of academic work that cannot be measured in terms of the evaluation criteria (McNay 1998). However, we are well-advised to base such analyses on appropriately detailed accounts of the features of specific fields. This is desirable not only because academic disciplines are highly heterogeneous in terms of their epistemic and social organization, and will therefore react to reforms in distinct ways (Gläser et al. 2010; Hammarfelt & de Rijcke 2015). Sensitivity to field specificities is also a virtue because it provides the basis for the necessary normative discussion about the actual benefits and disadvantages of research evaluation in specific cases. To be sure, some of the above-mentioned effects are detrimental in rather obvious ways, for example when resources are wasted because high-profile researchers enter short-term contracts just to boost the publication counts of their new hiring institutions (transfer markets). In other cases, whether or not the effects of evaluation are really undesirable may be less clear. The very premise of diagnosing undesirable effects in fact always entails an implicit or explicit definition of the “ideal” epistemic and social organization of a field that is improperly captured by a given incentive system. However, arriving at such a definition is itself far from trivial.

A fruitful case through which to explore both the empirical effects and the tricky normative questions surrounding research evaluation is engineering science. Previous research has focused on the areas of biomedicine, various natural sciences, and also the humanities and social sciences (Rushforth & de Rijcke 2015; Rinia et al. 2001; Nederhof 2006; van Leeuwen 2013; van Eck et al. 2013). Engineering, by contrast, has largely been neglected, although several of its features would seem to make it a particularly interesting object of study. Engineering is in fact a field with traditionally rather scarce publication activity. This means that substantial authority to decide what should be considered relevant research problems and appropriate technological solutions was reserved for influential local actors, i.e. academics in leading institutional positions as well as industrial partners. Recent reforms in publishing requirements change this situation. As engineers are increasingly expected to prove their research performance through publishing in peer-reviewed journals, an effect of epistemic integration is likely, i.e. a process in which epistemic decisions are partly relegated from local knowledge-producing settings to a gradually globalizing community of researchers. Beyond providing insight into the empirical consequences of this development for the organization of research practices, the case study promises interesting input for the above-mentioned normative discussion about evaluation. How do practitioners of engineering research themselves think about the fit between new institutional publishing requirements and their current practices? What, if anything, do they suggest should be changed about the epistemic, social, and practical organization of their field, and what does this mean for science policy?

Conceptual framework

A crucial concern of science policy makers in recent decades has been the issue of research evaluation. Since the 1980s, many national science systems have introduced elaborate procedures to assess the performance of researchers at various levels of aggregation. These usually involve combinations of qualitative and quantitative elements, and are usually coupled to the distribution of both symbolic and financial resources (cf. Hicks 2012). The rationales of research evaluation are quite diverse. Evaluation is inter alia meant to provide mechanisms for selective resource allocation, to satisfy the need for accountability in public spending, as well as to incentivize researchers to constantly improve their performance (Stern et al. 2016; Hamann 2016). An important criterion in most evaluation systems is the ability to publish articles in reputed international journals. Reputation here is often equated with indexation in the Web of Science or Scopus, or some sort of country-specific journal ranking. An influential line of critique has suggested that such standardized evaluation systems are not compatible with the specificities of a substantial range of disciplines (van Leeuwen 2013; Nederhof 2006; Hicks 2006). In many cases, this argument seems rather unproblematic, for example when the number of peer-reviewed papers a researcher has published is used as a quality indicator in areas that primarily operate with scholarly monographs. At the same time, some findings from recent ethnographic and interview-based research on the lived experiences of researchers make the implications of evaluation more difficult to interpret. In a case study on practices of quantifying publication output in Dutch law faculties, Kaltenbrunner & de Rijcke (2016) for example note very different normative preferences among legal scholars. Although large parts of Dutch legal scholarship are grounded in a domestic scholarly communication system and discipline-specific publication genres, many scholars actually advocate adopting the international, journal-centric publishing culture of the natural sciences. Rushforth & de Rijcke (2015) in turn have problematized scientometricians’ longstanding critique of the technical shortcomings of the Journal Impact Factor and the Hirsch Index, as well as of their use in the evaluation of research performance. Rushforth & de Rijcke argue that although such critical reflection is desirable, the argument that indicators are “misused” is problematic when those indicators are clearly embedded thoroughly in the daily research practices of biomedical scientists. Similarly, Gläser (2016) has critically engaged with the central recommendations of the San Francisco Declaration on Research Assessment (DORA). Originating from a deliberative discussion among publishers, journal editors, and scientists in the area of cell biology, the declaration recommends not to use the journal impact factor as a proxy for the relative quality of an outlet, and by implication, to avoid applying it as a metric for research evaluation at the level of the individual. However, drawing on interviews with Australian life scientists, Gläser concludes that indicators are and will continue to be used for exactly these purposes, simply because they are a powerful means to reduce complexity and delegate epistemic judgements for various routine purposes. The latter studies thus point to an interesting problem, namely the inevitable normativity of ideas of proper research in critical discussions about research evaluation.
The argument that indicators somehow have detrimental effects is only possible if one operates with a certain baseline of an “ideal” organization of a field. But what implicit or explicit assumptions do we make when we claim to have identified such a baseline?

In this paper, I provide a detailed analysis of changing publishing practices related to new evaluation criteria in a department of automotive engineering at a major German university. In so doing, I am intentionally hesitant to make assumptions about the desirability of particular evaluation criteria. Rather, I try to empirically observe the implications for the organization of research, as well as the normative conflicts among research practitioners to which they give rise. An important methodological entry point is provided by the concerns that changes in research evaluation in automotive engineering raise in the perception of its practitioners. The issues of research evaluation and publishing are in fact moving into the spotlight of attention not only of myself as an analyst, but also of the actors I study. Although there is no nation-wide evaluation exercise in Germany in a strict sense, there is the so-called Exzellenz Initiative and, more generally, the sudden advent of university rankings, which arguably affect how individual German researchers and administrative actors think about publishing practices (Hornbostel et al. 2008). In the particular university at hand, the rector has personally promoted a turn towards international publications also for engineers, i.e. they should make a determined effort to target journals indexed in Scopus or the Web of Science. Also, the university library offers courses that are meant to make researchers aware of strategies for improving their visibility and ‘impact’. A related organizational reform is the introduction of a so-called cumulative PhD thesis, i.e. a doctoral thesis in the shape of a collection of 3-4 articles instead of a monograph.

Publications fulfill complex functions in academic research practices, not just that of communicating knowledge. These functions are actually only partially theorized and empirically studied (Cronin 2003; Hessels et al. 2009; Wouters 1999). Latour & Woolgar (1979) have argued that scholarly communication can be conceptualized as a credibility cycle, i.e. a system in which scientists seek to publish articles that are deemed valuable by peers. Increasing reputation allows scientists to advance in the academic career system, which in turn confers control over resources that can be invested in instruments, scientific labor, etc. This perspective also implies that credibility is instrumental in the distribution of epistemic authority, i.e. the authority to decide what counts as an acceptable solution to a given research problem. Here it is important to keep in mind that different academic fields are characterized by unequal degrees of epistemic and social coordination of research activities (Whitley 2000; Becher & Trowler 2001). Engineering is characterized by an epistemically rather localized structure. Research topics as well as conceptual frameworks are relatively weakly standardized across the international community of researchers, in contrast to the strong consensus on theoretical frameworks and relevant research questions in fields such as high-energy physics (Knorr-Cetina 1999; Galison 1997; Traweek 1988). Moreover, research is very much dependent on local lab infrastructures, which in turn cannot easily be run by just any new researcher. This infrastructure needs to be continuously maintained in order for researchers to create technologies and research findings. Individual undertakings therefore have to be epistemically and practically coordinated with all the other activities in the department, and as I will show, this also has implications for the embedding of publishing activities in individual research practices. The distribution of epistemic authority is also shaped by the specific relationships between automotive engineers and industrial partners. Car manufacturers in particular are powerful agents in framing research problems for German engineers.

Research in the sociology of science has suggested that recent policy-induced reforms can usefully be analyzed in terms of changing authority relations (Whitley et al. 2010) between researchers, institutions, and science administrators. An advantage of this perspective is that it replaces the more diffuse notion of governance with a sharply defined analytical focus on empirically tangible changes in the organization of science, and that it avoids an overemphasis on the intended effects of particular governance instruments (Gläser 2010; Dahler-Larsen 2014). In principle, the tensions generated by the change in publishing practices in engineering can be described as an (incomplete and contested) shift in how the authority to define acceptable technological solutions is distributed. In internationally integrated areas of study, producers and reviewers of article manuscripts proposed for publication in journals can reasonably be expected to operate within a similar socio-technical research infrastructure. That is, they work with material instruments, methods, and research questions that have a high degree of epistemic and practical overlap. To be sure, there will be differences in the size and resources of different departments or labs. But the basic conditions will be similar enough for the judgment of reviewers of manuscripts to meaningfully relate to the work of the submitting researchers. This is different in the case of the locally oriented research culture of automotive engineering, in which publications in international peer-reviewed journals are a relative novelty. While researchers have always published to a certain extent, such publications were often in professional journals at the intersection of academic and industrial audiences, or in the shape of conference proceedings. The move to international publishing constitutes quite a significant change. For one, the authority to define acceptable contributions to knowledge is transferred at least partly to the editorial boards of international peer-reviewed journals, and thus away from the local epistemic authorities in German engineering. Because of the relatively weak degree of epistemic integration within engineering, however, the expectations of journal reviewers are not necessarily informed by a good understanding of how local research infrastructures at individual institutions are organized. The judgment of referees as to what counts as a suitable technological problem and solution may therefore diverge significantly from that of the submitting researchers. A similar challenge arises from particular practices of interaction between academia and industry. Engineers at the department at hand maintain close relations with their partners in the German car industry. These relations are based on mutual dependencies for various kinds of resources, i.e. funding, in-kind support, data, as well as the labor of engineering graduates. Again, these relations are locally specific, and they do not necessarily map onto the practical constraints and views about “proper” industry relations on the part of journal reviewers.

As I will show, the partial shift of epistemic authority that comes with the move to publishing in international journals gives rise to fundamental normative discussions among department members about what kind of knowledge automotive engineering should strive to produce. An important aspect to note here is that a community of researchers is not necessarily characterized by broad consensus about the epistemic and normative goals of its individual members. The community may actually derive its relative historical stability not so much from broadly shared commitments of individual researchers, but from specific institutionalized interactions that may also be hierarchical and conflictual in nature (Galison 1997; Hackett 1990; see also Lave & Wenger 1991). As the following analysis will show for the case of German automotive engineering, changes in publishing practices have created a real possibility that the hierarchical structures of this particular field will undergo a process of reconfiguration in the near future.

Methods

This paper is based on a set of 16 semi-structured interviews with members of a department of automotive engineering at a major German university. The selection of informants covers all hierarchical levels of the department: the department chair (the only full professor), two tenured senior lecturers, nine PhD candidates (five of whom are research group leaders), as well as four MA students involved in departmental research projects. This sample is representative of the organizational structure of many German engineering departments. In contrast to most natural sciences, the bulk of research is conducted by PhD students, who also function as research group leaders (groups usually consist of 6-8 other PhD students plus an equal number of student assistants). After completing their doctorates, the overwhelming majority of PhD students take up industry jobs. Purely academic careers are extremely uncommon. Traditionally, professors were recruited from leading managerial positions in industry.

Interview guides were designed to generate information about personal career outlooks, the substantive content of individual research projects, as well as the organization of research at the level of the department. The latter set of questions put a particular emphasis on research evaluation and the increasing role of peer-reviewed articles in engineering science. Interviews lasted between 50 and 150 minutes and were transcribed in full. In addition to the interviews, my data collection was enriched by an extensive guided site visit to the workshop spaces, as well as a selective reading of published research by department members to contextualize the accounts given in the interviews.

Data analysis broadly followed the inductive approach of grounded theory (Charmaz 2006). Through iterative reading of both transcripts and scholarly literature, I have tried to order my informants’ views on changing publication practices according to shared themes. This has resulted in three sensitizing concepts that relate the issue of publishing to key aspects of knowledge production at the department (Bowen 2006): the reliance on a shared research infrastructure, the way particular epistemic choices in research are made, as well as the relationship between the department and its industrial partners.

Empirical analysis