Will the Academic Prisoner’s Dilemma impact eLife 2.0?

KamounLab
Nov 2, 2022


The life science publisher eLife is doing away with the accept/reject decision in a bold move to reinvent the journal as a publisher of peer-reviews. But why not also publish editorial evaluations of the papers that are desk rejected, to address concerns that conflicts of interest may influence editorial decisions?

Cite as: Kamoun, S. (2022). Will the Academic Prisoner’s Dilemma impact eLife 2.0. Zenodo https://doi.org/10.5281/zenodo.7274521

The traditional process of scholarly publishing and a toxic publishing culture are among the reasons why Early Career Researchers (ECRs) drop out of academic science jobs. The shock of getting the reviews of your first papers is often a traumatic experience that can be a big turn-off. You worked hard on the project, went through the tedious process of writing the paper, anxiously waited a month or more to get the editorial decision and reviews. And bang! A harsh reject. If it’s not the condescending “more appropriate for another journal”, then it’s certainly the laundry list of experiments the reviewers think you should do. Generally enough work to keep you busy for another PhD and drain valuable research funds for another year.

But things are improving. Preprints have been a game-changing development and are here to stay. I love ’em preprints because they have this liberating effect of allowing us to share our papers immediately after they are written. Platforms like bioRxiv and Zenodo empower authors to directly share their work with colleagues, cutting out the traditional middlepeople of scholarly publishing. Auf Wiedersehen, editors and reviewers. Goodbye. Au revoir. Sayonara. We — the people who actually did the research and wrote the freakin’ paper — decide when and how to publish our work.

Well, it’s not exactly like that, is it? We still send our preprints to journals, and then we’re sort of back to square one. Editors and reviewers can still reach unfavorable decisions and bury us with their requested experiments. These can be useless, wasteful and at times ridiculous. Here is one example: you submit a paper characterizing a gene of the model plant Nicotiana benthamiana, and reviewers ask for experiments with the tomato ortholog. Why? Is our beloved benthi not a plant? That’s like submitting a paper on a mouse gene, and reviewers asking for similar experiments in rabbits. This actually happened to us twice in the last year. It ain’t cheap doing the extra work. Salaries, reagents and precious time are wasted pleasing the reviewers. This nonsense is slowing down scientific progress. We have much better stuff to do, if you want my opinion.

eLife becomes a publisher of peer-reviews

Science thrives on a vibrant culture of discussion and debate. Open science widens the net. Anyone can access the papers and the data to comment on them. A tweet by someone you don’t know could lead you to think differently about your scientific findings and help you develop new concepts. The dynamic is different from the traditional closed pre-publication peer-review process. We move from elitist old boy clubs to an open door party. We move from an evaluation of whether the work should be published in journal X or Y to feedback on the research. This is healthy for science. There is no excuse not to do more of this open dialogue in the age of the internet and open publishing.

With this in mind, it would seem evident that scientists support post-publication peer-review and posting of reviewers’ reports. The life of a paper shouldn’t end the day it’s published. Quite the contrary, it starts on the day it’s posted and shared with everyone. Publish and filter is the motto of scholarly publishing in the digital era. Preprint your papers and the feedback will come from many sources: social media, formal peer-review and impromptu comments from anywhere.

That’s at least the model reformers like Gautam Dey and others aspire to. But toxic habits are hard to change. Many in the life science community continue to cling to outdated models of scholarly publishing that haven’t always served the community well, with nonsense articles being regularly published by journals of all shades and reputations. It’s against this background that eLife emerged a decade ago as a breath of fresh air in the publishing world. From its early days, eLife has been a progressive force intent on shaking up the status quo. More recently, Editor-in-Chief Michael Eisen and team have taken many bold steps that deserve wide support. In 2021, eLife moved to only review articles that have been previously posted as preprints. And in October 2022, eLife reinvented itself from a classic journal into a curator of preprints and a publisher of peer-reviews. This is how the new process works:

From next year, we will no longer make accept/reject decisions at the end of the peer-review process; rather, all papers that have been peer-reviewed will be published on the eLife website as Reviewed Preprints, accompanied by an eLife assessment and public reviews. The authors will also be able to include a response to the assessment and reviews.

The decision on what to do next will then entirely be in the hands of the author; whether that’s to revise and resubmit, or to declare it as the final Version of Record.

eLife 2.0.

Authors taking full control

Let’s start with the positives. And there are many. eLife has built itself a reputation as a progressive publisher. This new model is bold, even by the journal’s progressive standards. In keeping with the journal’s original ethos, it puts working scientists in control of the publishing process. Preprints have already given authors a formal platform to share their work without interference from Editors and reviewers. The new system goes further by giving authors control over when to stop the review process and upgrade the paper into a “Version of Record”:

At any point following peer review, you can choose to have your Reviewed Preprint published as the ‘Version of Record’. Following author proofing and conformance with our journal policies, eLife will send your paper to be indexed on PubMed.

Open science advocate and eLife Board Chair Prachee Avasthi hit the nail on the head with this thread — arguably the best among many written about the new eLife. The new system is all about authors taking full control of the publishing process. You, the authors, will decide when the work will “see the light of day” as a preprint, the extent to which the paper is revised, and when to trigger the “Version of Record”.

You have total control over the review process. The editors don’t tell you when your paper is ready to upgrade into the final version. This is what eLife means by abolishing accept/reject. No editor will ever reject your paper once it’s been sent out for review. If this isn’t progress, I don’t know what is.

Accept/Reject is dead, long live the eLife assessment!

The new eLife process doesn’t end with the author opting to trigger the Version of Record. At that stage, eLife editors and reviewers will produce a summary of what they think about the preprint — the so-called eLife assessment. They will use a common vocabulary to ensure clarity and consistency. Essentially, they will rate the paper along two criteria: significance of findings and strength of support.

The eLife ranking system. Badges, anyone?

Instead of the binary accept/reject, we get a ranked list of terms that offer more subtlety to the evaluation. This provides a wider assessment range for papers, anywhere from desk reject to glowing ratings for significance and support. The original all-or-nothing accept/reject bins are ditched for a more sophisticated evaluation of the work. This is progress. Kudos.

Let’s wait and see how this will work in practice. It will all depend on the quality of the reviews, of course, and the fairness of the process. What this is, really, is a scoring system. It’s easy to see how it will get converted into some sort of badges — the idea has been floating around for years — or even dreadful metrics. I can also foresee someone posting a widget that turns the controlled vocabulary into an eLife article score (the eScore?).
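To make that concrete, here is a minimal sketch of what such an eScore widget could look like. The vocabulary terms and the numeric weights below are my own illustrative assumptions, not an official eLife scale:

```python
# Toy "eScore" widget: maps an eLife-style assessment's controlled-vocabulary
# terms to numbers and averages them. The term lists and weights are
# illustrative assumptions, not anything eLife has published as a scale.

SIGNIFICANCE = {"landmark": 5, "fundamental": 4, "important": 3, "valuable": 2, "useful": 1}
SUPPORT = {"exceptional": 5, "compelling": 4, "convincing": 3, "solid": 2,
           "incomplete": 1, "inadequate": 0}

def escore(significance: str, support: str) -> float:
    """Return a crude 0-5 'eScore' from the two assessment terms."""
    return (SIGNIFICANCE[significance.lower()] + SUPPORT[support.lower()]) / 2

if __name__ == "__main__":
    print(escore("Important", "Solid"))  # -> 2.5
```

The point is not that anyone should use such a score, but how trivially a controlled vocabulary collapses into a single number once it is out in the wild.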

Accept/Reject is dead, but desk rejection is very much alive

Many have responded to the new eLife model by pointing out that although the standard accept/reject decision is gone, a critical triage step is still applied at the submission stage when editors decide whether the paper goes out for peer-review or is desk rejected. Thus, rejection isn’t totally gone. Your paper can still be desk rejected.

It’s unclear how exactly the process will work. Apparently, the bar will be lower than in the current system where only about 30% of the papers are invited for full submission. As eLife Deputy Editor Detlef Weigel wrote, the plan is to review a “much wider spectrum of papers”.

Desk rejection is a painful outcome for many scientists and is highly prone to bias, whether it’s trendiness, career stage, author reputation or country of origin. If the bar remains relatively high at ~30%, then eLife editors will indeed become kingmakers, with the majority of submissions rejected at this early stage. As eLife knows well, the only way to satisfy the community is openness and transparency. In both the current and new models, we don’t know which preprints have been submitted to eLife and which ones got rejected. This can be fixed.

Will the Academic Prisoner’s Dilemma influence desk reject decisions?

Peer-review is an inherently conflicted system. Editors judge the papers of scientists who in turn evaluate the editors’ papers, grants, promotions and institutions. This leads to what James Heathers has dubbed the Academic Prisoner’s Dilemma:

I’ve referred to this attitude before as a kind of Academic Prisoner’s Dilemma — imagine Researcher A and Researcher B write a lot of papers in the same area. People within their personal networks review each other’s papers, review each other’s grants, and have a mutual interdependence.

If they are both silent with regards to strong criticism of each others errors, they both have the freedom to publish what they want. Direct criticism would quickly devolve into a mutual loss of trust, interfering with the ability to publish papers or receive grant money.
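To make the dilemma concrete, here is a toy repeated-game sketch. The payoff numbers are entirely made up for illustration (they are not from Heathers); they simply encode his point that the one-off gain from criticizing a peer is swamped by the lasting loss of trust:

```python
# Toy sketch of the Academic Prisoner's Dilemma as a repeated game.
# Payoffs are invented for illustration only: criticizing a peer buys a
# one-off gain but triggers lasting mutual distrust, so staying silent
# ends up as the stable habit.

ROUNDS = 5  # papers/grants each researcher sends past the other over time

def career_payoff(a_criticizes_first: bool) -> tuple[float, float]:
    """Total payoff for (Researcher A, Researcher B) over ROUNDS interactions."""
    total_a = total_b = 0.0
    trust_broken = False
    for _ in range(ROUNDS):
        if trust_broken:
            total_a += 1; total_b += 1   # mutual loss of trust: both obstructed
        elif a_criticizes_first:
            total_a += 4; total_b += 0   # one-off gain for the critic...
            trust_broken = True          # ...followed by retaliation ever after
        else:
            total_a += 3; total_b += 3   # mutual silence: both publish freely
    return total_a, total_b

print(career_payoff(False))  # (15.0, 15.0): silence pays over time
print(career_payoff(True))   # (8.0, 4.0): criticism is a losing move for both
```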

The eLife model — editorial decisions made by active scientists — is prone to the Academic Prisoner’s Dilemma (APD). This explains some of the negative reactions to eLife 2.0 and the fear that editors will become unduly powerful gatekeepers. There is a perception that no matter how pure the intentions are, an exclusive elite club of editors will ultimately be biased and engage in some form of gamesmanship. Will the papers of an established scientist have the same likelihood of being desk rejected as those of ECRs or authors from less-renowned institutions and countries? It’s a fair question to ask.

Most scientists suffer from a form of academic paranoia. They think that the evaluation of their work is tainted by non-scientific arguments and that the peer-review system is terminally broken. It’s a hard sell to say: don’t worry, we’re a fair bunch at eLife.

Reviewers tend to avoid being openly critical of colleagues who are in positions of power. I have seen this behavior over and over, anywhere from evaluation panels to Scientific Advisory Boards of academic institutions. Scientists — no matter how famous or established they are — tend to be reluctant to openly criticize a certain category of peers. They fear that there will be payback, that they will pay dearly for their criticism in some way or another: a bad review, gossip, what have you. Why do you think blatant scientific fraud is rarely reported or called out? We have reached a point where we can’t self-police. We have to rely on non-academics, such as science folk heroes Elisabeth Bik and Leonid Schneider, to clean up our mess and call out the fraudsters.

Editors should publish the decision letter when a paper is desk rejected

The lingering suspicion that APD and other forms of bias and gamesmanship will drive editorial decisions at eLife 2.0 can only be addressed with a fully open approach to the initial editorial decision. As I wrote a few years ago, editorial decisions will be open and transparent only when…

“…every submitted article (preprint) receives at least an editorial evaluation. Editors evaluate all submissions and post their decision letter and comments on the website independently of whether they send the article for external review. Appeals are also posted on the website. I expect full transparency to regain the confidence of the scientific community in the vetting process.”

Every editorial decision gets published. Editors become moderators rather than gatekeepers.

We need to redefine the role of the editor from gatekeeper to moderator of a scientific discussion of the paper. eLife 2.0 goes a long way in delivering this vision with eLife assessments. Why not do the same for the desk rejected papers and also publish the editorial decisions?

I’m an author who recently had two papers desk rejected by eLife 1.0. I think the editors have articulated a rational and defensible position in their decision letters. I don’t see why these assessments cannot be published and appended to the preprints.

I understand, as bioRxiv co-founder Richard Sever and others have pointed out, that a desk reject decision can be due to the poor quality of the submitted preprint. Also, the capacity of eLife 2.0 to handle a much larger volume of submissions may become an issue. But perhaps a system where every article gets a published evaluation will reduce the number of lower-quality submissions?

A call for transparency in the editorial decision to desk reject.

Bonne chance eLife 2.0

My aim with this post isn’t to accept/reject the new eLife (Ha ha!). But similar to how the new eLife will probably treat most of my papers, I’m giving the model a measured thumbs up with some recommended changes. This is my eLife-style assessment of eLife 2.0:

In the current version, the statement that eLife has abolished accept/reject isn’t supported by the data. This reviewer strongly recommends that eLife editors post their decision letter on the website independently of whether they send the article for external review. Desk reject decisions should be open and transparent. I’m looking forward to evaluating the revised model.

I haven’t yet discussed eLife 2.0 with my team and colleagues, but I expect we will continue to submit papers to eLife as we always have. However, in cases where the preprint is desk rejected, I wish to see the decision letter posted online or at least appended to the preprint. If eLife doesn’t do that, perhaps we will post it ourselves. It’s about giving authors full control, isn’t it?

Acknowledgements

I thank Nick Talbot for useful discussions on this topic. Some text was adapted from an older post on PubPeer. This article was written while I was infected with SARS-CoV-2. The virus may have affected my judgement. Please let me know if you spot any typos or errors.
