Wither pre-publication peer review to reinvent scientific publishing

KamounLab
Aug 31, 2021


Adapted from a 2018 blog post.

Preprints — open versions of scientific papers that typically precede formal publication in scientific journals — have emerged as an important source of scientific information. How do preprints differ from standard scientific publications? The primary difference is that classic scientific papers normally go through the process of pre-publication peer-review. This is thought to give the publication a stamp of approval — experts have rigorously checked and acknowledged the quality of the science and the conclusions drawn from the work. Other scientists can now read the work with a degree of confidence knowing that it has been carefully vetted by their peers.

That’s at least the theory. In practice, the quality of pre-publication peer-review is highly variable and the system can be gamed. Deeply flawed papers routinely get published in all types of journals, from minor outlets to the famous glam-mags. Although the reasons are complex, peer-review is known to be slow, inconsistent, biased, and subject to abuse. In addition, the review process generally lacks transparency, leading to frustration among authors and readers alike. As a consequence, scientists often approach the peer-reviewed literature with a great degree of skepticism.

With the emergence of internet commentary and social media, post-publication evaluation has become routine. Post-publication peer-review is now an important factor in the scientific debate. Scientists openly comment on and criticize published papers using a variety of online platforms. It is now fairly common for erroneous research papers published in peer-reviewed journals to be corrected or retracted following online criticism. The evaluation of the scientific literature doesn’t end with publication in peer-reviewed journals. The life of a paper extends beyond its acceptance date.

What’s flawed in the standard publication system is its reliance on pre-publication peer-review to filter publications and weigh on the rather subjective decision of what gets published where. This has created corrosive incentives for scientists. The focus is not on producing high-quality, reproducible science that stands the test of time, but on convincing, and maybe even fooling, reviewers and editors. Authors have developed several stratagems to cope with the somewhat arbitrary nature of pre-publication peer-review. One example is serial submission: authors keep submitting a manuscript until they stumble on sympathetic reviewers and editors. This process can take years, and typically the paper’s conclusions don’t change much across the multiple versions. Some authors keep submitting their work even when reviewers highlight important flaws. Ultimately, some journal, possibly a reputable one, ends up publishing the manuscript. However, once the paper is out, the scientific community is largely unaware of its protracted history. The combined effort of the anonymous reviewers — a costly affair — remains buried in the journals’ archives, out of reach of the scientific community. All the pro bono time that the referees devoted to the review process ends up doing little to advance science. I wish an economist would calculate the overall cost of this sterile exercise. [It turns out somebody did precisely that. Thanks @jessicapolka for the link.]

Pre-publication peer-review can of course improve the quality of the science, but the fact is that bad science eventually filters through, defeating the purpose of the entire exercise. Flawed and irreproducible papers frequently get published in all types of journals. The authors of such papers often hide behind pre-publication peer-review to defend their work. “But it’s in [insert name of journal here]!”, they claim. Ironically, the same authors tend to be dismissive of post-publication peer-review, as if arguments raised through that process somehow carry less weight than the closed-door deliberations of an editor and a couple of referees. Indeed, post-publication review suffers from the apathy of authors and editors: a reluctance to fix the record. This is a truly shocking stance, given that snubbing criticism runs counter to our fundamental ethos as scientists and would ultimately erode society’s trust in the scientific community. Calls to class the failure to address errors as research misconduct aim to create incentives for self-correction. The responsibility to fix the record, even after formal publication in peer-reviewed journals, rests primarily with the authors.

The true stamp of approval in science is reproducibility. Can the work be repeated? Are other scientists building on it to advance their own research? If a study stands the test of time, it will have gone through far more rigorous and stringent vetting than pre-publication peer-review by two or three referees. This is where preprints come into play. Do we need to devote so much time and money to the tedious and expensive process of pre-publication peer-review when there are painless and free platforms to disseminate scientific knowledge and accelerate the community’s capacity to build on published work? Is the significant material and human cost of pre-publication peer-review justified? Can the peer-review process be made more transparent and focused solely on improving the science? The current scientific publishing format dates from a pre-internet era, when the only way to get published was to have the article printed on paper — an expensive process that required a certain degree of filtering. Those days are long gone, but we have yet to fully transition scientific publishing into the digital age. We don’t need to sift as stringently, and we certainly don’t need to outsource the process of flagging the most exciting science to a handful of for-profit publishing houses.

What we need is to disseminate new research findings as early as possible and then, and only then, curate the literature. I foresee a future in which all scientists first publish their work as preprints, and journals reinvent themselves as forums for systematic analysis and discussion of this preprint literature. What we need is not pre-publication peer-review but post-preprint peer-review. Peer-review of the scientific literature needs to be transparent and dynamic, more than just a mechanism to reach that dreaded accept/reject decision. Peer-review itself would become a much more recognizable unit of scholarship. Review articles and commentaries would become more analytical and influential. Important science would get widely discussed and peer-reviewed by experts on readily accessible and searchable web platforms.

2010 Twitter wisdom

Journals need to reinvent scientific publishing. This is where the future lies — generalized use of preprints, with journals serving as forums for curation and discussion of this literature. The first step would be for funding agencies to fully embrace their open science manifestos and mandate preprints for their grantees. Journals would evaluate only papers that have already been released publicly as preprints, doing away with pre-publication peer-review altogether. All evaluations and reviews, whether positive or negative, would be made public and linked to the preprint, posted as soon as they are received. This live peer-review system needn’t be limited to editors and commissioned reviewers; it can include author-led reviewing and crowdsourced community feedback. As the reviews accumulate, the journal editors may engage in the discussion, elect to tag a particular paper, and promote a given version of the paper to a formal article. The editor’s role becomes more akin to that of a curator than a gatekeeper. Nor does the process need to be static: as the community further comments on the article and follow-up studies are published, the editors may revise the tags and annotations of the paper to reflect new knowledge and scientific advances.
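To make the moving parts of this workflow concrete, here is a minimal sketch in Python. Everything in it — the Preprint and Review classes and the revise, curate, and promote methods — is hypothetical and invented purely for illustration; it reflects the sequence described above, not the API of any existing platform.

```python
from dataclasses import dataclass, field
from datetime import date

# A toy model of the post-preprint review workflow sketched above.
# All names are hypothetical; no real platform's API is implied.

@dataclass
class Review:
    reviewer: str   # editor, commissioned referee, author, or community member
    version: int    # which preprint version the review addresses
    text: str
    posted: date    # made public as soon as it is received

@dataclass
class Preprint:
    title: str
    authors: list[str]
    version: int = 1
    reviews: list[Review] = field(default_factory=list)
    tags: list[str] = field(default_factory=list)  # editor-curated, revisable
    promoted_version: int | None = None  # version promoted to a formal article

    def post_review(self, review: Review) -> None:
        # Every review, positive or negative, is public and linked to the preprint.
        self.reviews.append(review)

    def revise(self) -> None:
        # Authors respond to open reviews by posting a new version.
        self.version += 1

    def curate(self, tags: list[str]) -> None:
        # Editors act as curators, not gatekeepers: tags can be revised
        # later as follow-up studies and new knowledge accumulate.
        self.tags = tags

    def promote(self, version: int) -> None:
        # Editors elect to promote a given version to a formal article.
        self.promoted_version = version

# Example: a community review arrives, the authors revise, the editor curates.
paper = Preprint("Example study", ["A. Author", "B. Author"])
paper.post_review(Review("community", 1, "The controls in Fig. 2 are missing.", date.today()))
paper.revise()
paper.curate(["methods verified"])
paper.promote(paper.version)
```

The point of the sketch is that review, revision, and curation are separate, repeatable events attached to an openly versioned record, rather than a one-off, closed-door accept/reject decision.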

This vision describes a publishing model that would reinvent the concept of a scientific journal into a live and open forum of scientific debate and analysis. The model centers on a full integration of the preprint ecosystem into the journal interface. In Journals 2.0, I proposed a roadmap towards delivering such a model. The logical and progressive first step would be for funding bodies to mandate preprints for their grantees, a proposal known as Plan U.

The fundamental feature of this radical scientific publishing model is that every single evaluation and review of every submitted article is published through a fully transparent open process. This would add badly needed accountability and increase scrutiny of the published literature. It would also save time and money by eliminating redundant evaluations and reducing reviewer load. Science and society can only benefit.
