
Duke University

Null and Void

Implications of Publication Bias

Matthew Feng

Math 89S – Mathematics of the Universe

Prof. Hubert Bray

15 Feb 16

Introduction

While the media is a powerful instrument for communicating information to vast numbers of people at the click of a mouse, it can be equally powerful at misinforming the public, whether deliberately or unintentionally. Indeed, the media has had a role in perpetuating crises ranging from tobacco usage to climate change (Oreskes & Conway 2010). Misdirection can be used to prolong problematic behavior via denial, but providing wrongful information can also lead researchers and the general public astray with false positives. Generally, this phenomenon is known as publication bias, which occurs when “the research that appears in the published literature is systematically unrepresentative of the population of completed studies” (Rothstein, et al. 2005). Publication bias first received attention in 1959, when Theodore Sterling found that 97% of articles across four journals reported statistical significance, implying that disproportionately little attention was paid to studies lacking statistical significance (Sterling 1959). In what is called the “file-drawer problem,” scientific studies with negative results often remain unpublished, akin to being thrown into a drawer and forgotten. Thus, science can be falsely led on by claims of new discoveries when in reality theories remain unproven and the prevailing truth is one of uncertainty and doubt. This paper will explore both scenarios of media distortion of information, denial and false positives, both within and outside the field of astronomy. Specifically, this paper will begin with the underlying reasons for the disconnect between what scientists and the public know before discussing both the implications of and solutions to the problem.

Causes

Publication bias can arise for multiple reasons, but one primary cause is that researchers often want to report positive results. In this context, positive does not necessarily mean good for humanity but instead refers to a phenomenon that has some underlying theory or explanation, in contrast to ‘negative results,’ or the lack of an explanation. The converse is also true: readers and researchers do not have as much interest in hearing that concepts remain unproven. As Kevin Mullane and Michael Williams (2013) explain, “Researchers are an inherently optimistic group – the ‘glass half full’ is more likely brimming with champagne than tap water.”

However, it is insufficient to simply say that researchers are optimistic or seek positive results; it is important to also understand why that is the case. The main reason for the predisposition to publish only positive results on the supply side, or the researcher side, is competition between scientists. In other words, because of limited funding from institutions, there is pressure to be the ‘first to publish,’ a dynamic captured by the phrase ‘publish or perish’ (Mullane & Williams). Another aspect, beyond mere survival in the lab, is that breaking the ice on a theory can bring career advancement and recognition within the community. Taken together, these are very compelling incentives for researchers to aim for positive or groundbreaking results rather than merely corroborating the certainties or doubts already found within the scientific community.

Because research is a two-way street between what data can be supplied via science and what is demanded by institutions and publishers, it is important to understand incentives from the publisher’s perspective as well. For publishers, an obvious motive is the bottom line, or profits, and so if the general public is uninterested in negative or null results, then researchers will respond to this demand-side preference with a greater proportion of positive results, skewing the actual literature on any subject. Furthermore, the rise of the Internet age allows almost anyone to publish because the costs of doing so are significantly lower than at any time before (Mitra 2001). And while the common person cannot necessarily publish his or her own work without some kind of scientific authority, news can be read from anywhere on any site, meaning it is important to have material and results interesting enough for anyone on the Internet to want to republish, again reinforcing demand-side preferences for interesting headlines and results.

“Market factors” aside, it is important to also consider the influence of external actors, who often hold a vested interest in the policy outcomes derived from publications. For example, the fossil fuel industry’s massive lobbying power in Washington, D.C., extended far beyond being in the pockets of legislators; the industry also had real influence in laboratories. With undisclosed funding, corporate powers were able to induce scientists to publish studies denying the existence of any real impact of emissions and greenhouse gases on the environment at large (CIC 2015).

Implications

But beyond influencing what kind of research scientists do, the nature of publication also shapes how they do it – for instance, a scientist is compelled both to publish positive results and to publish quickly. This is an implied part of being the ‘first to publish,’ but the consequences are grave, as such an “urge and rush to be first to publish a new ‘high-profile’ finding” can result in “sloppy science” (Mullane & Williams). The most alarming figure of all has to do neither with the number of published articles that yield statistically significant findings nor with the reasons publications are compelled to be positive; instead, a key takeaway is that, because of such sloppy, overly optimistic science, about 70% of follow-up studies contradict the originally published observation (Mullane & Williams).

As previously discussed, the Internet age appears to strongly amplify the incentives behind publication bias. Indeed, “from 2001 to 2010, the annual rate of retractions by academic journals increased by a factor of 11 (adjusting for increases in published literature, and excluding articles by repeat offenders)” (Lam 2015). While not fully causal at first glance, we can also see that “67.4 percent [of reviewed retractions] resulted from misconduct” (Lam). This is concerning because it highlights both the magnitude of the bias and its explicit, causal relationship to the authors. Thus, publication bias seems to be statistically confirmed, particularly because most error stems from deliberate misconduct, not carelessness. And while we could put the statistics together and analyze them ourselves, scientists themselves admit that it all comes down to “increased competition for academic jobs and research funding, combined with a ‘publish or perish’ culture” (Lam). Particularly problematic is that the field is self-aware that such issues exist while doing nothing to create mechanistic checks on such behavior. Overall trends reflect this: among all articles, “from 1990 to 2007, the proportion of positive results grew by 22 percent” (Lam).

Across different fields of science, this has had a deleterious effect on the quest to find the truth and inform the masses. For example, climate change was mentioned in the previous example, but the issue has also spilled over into research on the universe. With great bureaucracy often come mind-numbing and questionable decisions, such as the choice to appoint Ted Cruz, who is openly opposed to climate change research, as chairman of the Subcommittee on Space, Science, and Competitiveness (Plait 2015). As doubt grows in one area of science, it has clear implications for the rest of the discipline because of how interconnected it is.

Within astrophysics itself, the implications are enormous as well. The most notable recent example involves scientists at Harvard University who announced the discovery of gravitational waves. Unfortunately, after they published their results for peers to see, the signal was deemed to have been confused with interstellar dust (Economist 2014). The haste to publish data with potentially earth-shattering implications for the rest of the field plausibly reflects publication bias, because the personal gains at stake are so large. However, a caveat regarding this study’s results, as with all instances of publication bias, is that hasty findings that are retracted do not equate to the absence of the claimed phenomenon. Indeed, although the Harvard scientists misinterpreted their findings, gravitational waves did turn out to exist, as recent discoveries indicate (Overbye 2016).

Lower-profile incidents within astronomy are fairly common as well. In one case, researchers at Stanford reported a massive explosion initially attributed to a star, only for the source to be reclassified just a few hours later as something that “has been active sporadically over the whole mission span” (RetractionWatch 2014). Another case involved an asteroid, initially classified as a near-Earth object but later deemed a “routine main-belt object” (RetractionWatch 2015). While relatively inconsequential, it should be stressed that not every report like these can be retracted without widespread implications; rather, the rush to publish findings, even preliminary ones, can sway public opinion or create broad public reactions. Negligence on the part of scientists can also corrupt data further down the road for others, creating a domino effect on research that cites previous findings.

As with the previous section, we now move from problematic research published by scientists to problematic science demanded by the media. Indeed, the kinds of studies the media demand can be, by one paper’s measure, a net 23% less reliable than less newsworthy papers published in scientific journals (Siegfried 2014). The bottom-line incentives of the media give way to bogus science and poor research. For example, the proposed Mars One project, which involves colonizing the planet, is largely driven by a proposed reality TV show that would follow the astronauts; a procedural necessity like objectivity, however, would be undermined by a TV show’s desire for higher ratings and a larger audience, intuitively leading to an artificial show without real scientific findings (Listner & Newman 2015). As such, commercial incentives to wield science as a tool for entertainment rather than for public good and innovation can spur abnormal, bizarre, and counterproductive investments. Thus, the scope of publication bias extends far beyond scientists’ inherent incentives to publish negligently; the bias toward interesting results also skews research in favor of high-demand publications instead of high-impact ones.

Solutions

The clearest solutions for publication bias involve greater requirements for publishers and scientists, so that stringent scrutiny can weed out poorer studies. Indeed, many advocates propose that investigators must be able to summarize their data in order to indicate the validity of the trials as well as the sensibility of the findings (DeMaria 2004). Such attention to detail may not root out all the problems, but it can help with the most egregious. Additionally, a universal registry, in which all studies are placed into the public domain, has been proposed to address issues such as exclusion (DeMaria). Specifically, when studies are not published or stored, meta-analyses on the subject can yield very skewed results because only positive results are available rather than a balanced or representative body of evidence and literature. With a public bank, it becomes easy to locate articles and so construct a more representative set of data. While this again does not address the issue of scientists who can be bought out, it does help contextualize the bigger picture within fields.
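The distortion that exclusion causes in a meta-analysis can be sketched with a small simulation. This is a hypothetical illustration, not drawn from any cited study: the studies, the publication threshold, and all numbers are invented purely to show the mechanism by which a registry corrects the file-drawer effect.

```python
import random
import statistics

random.seed(0)

def run_study(true_effect=0.0, n=30):
    """Simulate one study: draw n noisy observations around the
    true effect and report the sample mean as the estimated effect."""
    data = [random.gauss(true_effect, 1.0) for _ in range(n)]
    return statistics.mean(data)

# 1,000 studies of a treatment that in truth has NO effect.
estimates = [run_study() for _ in range(1000)]

# File-drawer filter: only studies with a large positive estimate
# reach publication; null results stay in the drawer.
# (0.3 is an arbitrary threshold chosen for illustration.)
published = [e for e in estimates if e > 0.3]

meta_all = statistics.mean(estimates)        # registry: every study counted
meta_published = statistics.mean(published)  # literature-only meta-analysis

print(f"all studies (registry): mean effect = {meta_all:+.3f}")
print(f"published only:         mean effect = {meta_published:+.3f}")
```

A meta-analysis over the published subset alone reports a sizable positive effect even though the true effect is zero, while averaging over the full registry recovers an estimate near zero – which is the argument for a public bank of all completed studies.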

Ultimately, however, it is of the utmost importance that the field of science be disconnected from the public’s page views – sensationalism can no longer be the determinant of what research is done and what is not. The field of science is far too important for its day-to-day procedures to be compromised by individuals who are merely trying to succeed in their profession. Instead, the values of science should return to their roots of truth-seeking, because doing so both decreases the incentive to publish poor data and reduces the desire for positive results; only in this way can publication bias truly be rooted out.

Bibliography

Anthony DeMaria, “Publication Bias and Journals as Policemen,” Journal of the American College of Cardiology, Vol. 44, No. 8, 2004, Web.

Bourree Lam, “A Scientific Look at Bad Science,” The Atlantic, Sept. 2015, Web.

Climate Investigations Center, “Willie Soon and Conflicted Climate Science: Science Journals Unwittingly Serve As A Conduit For Corporate Interests,” June 2015, Web.

Dennis Overbye, “Gravitational Waves Detected, Confirming Einstein’s Theory,” New York Times, Feb. 11, 2016, Web.

Economist, “Let the light shine in,” Economist Magazine, Jun. 14, 2014, Web.

Hannah Rothstein, et al., “Publication Bias in Meta-Analysis,” 2005, Web.

Kevin Mullane and Michael Williams, “Bias in research: the rule rather than the exception?” Elsevier Journal, 2013, Web.

Michael Listner and Christopher Newman, “Failure to launch: the technical, ethical, and legal case against Mars One,” The Space Review, Mar. 16, 2015, Web.

Naomi Oreskes and Erik Conway, Merchants of Doubt, 2010. Print.

Phil Plait, “Yup, a Climate Change Denier Will Oversee NASA. What Could Possibly Go Wrong?” Slate Magazine, Jan. 13, 2015, Web.

RetractionWatch, “Harvard-Smithsonian space center retracts ruling on asteroid,” Feb. 20, 2015, Web.

RetractionWatch, “Twinkle, twinkle little star, how I wonder where you went: Astronomy report retracted,” Jun. 17, 2014, Web.

Steve Mitra, “The Death of Media Regulation in the Age of the Internet,” Legislation and Public Policy, 2001, Web.

Theodore Sterling, “Publication Decisions and Their Possible Effects on Inferences Drawn from Tests of Significance--Or Vice Versa,” Journal of the American Statistical Association, 1959, Web.