As new venues for peer review flower, will journals catch up?

By now, you know about preprints, and I bet you’ve read some, too – perhaps a manuscript posted on PsyArXiv, bioRxiv, or medRxiv. With the posting of unrefereed manuscripts now normalized in psychology and other fields, no longer must new findings languish in journal management systems, waiting for slow reviewers, a busy editor, and a time-consuming revision process. New findings can be available to others immediately.

Peer review is important, and if its benefits are sufficiently large, the traditional long interval between a manuscript’s submission and its becoming available to other researchers can be worthwhile. But the costs of that delay can also be significant.

Thanks to the present crisis, some of the costs of delays are now obvious to everyone, at least for some findings, such as those related to new COVID-19 treatments or policies.

I’ll let others debate the cases in which the benefits of distributing manuscripts prior to peer review outweigh the negative effects, such as the potentially wider spread of spurious findings. Whether one likes it or not (for the record, I do like it), preprints aren’t going away. The dam of tradition has been breached, and the resulting flood of preprints might conceivably be slowed, but it won’t dry up.

Given that preprints are here to stay, the field should be devoting resources to getting them certified more quickly as having received some amount of expert scrutiny. This is particularly important, of course, for preprints making claims relevant to the response to the pandemic.

In many cases, one component of this certification is already happening very quickly. More publicly available peer review is happening today than ever before – just not at our journals. While academic journals typically call on half a handful of hand-picked, often reluctant referees, social media is not so limiting, and lively expert discussions are flourishing in forums like Twitter, PubPeer, and the commenting facilities of preprint servers.

So far, most journals have simply ignored this. As a result, science is now happening on two independent tracks, one slow and one fast. The fast track is chaotic and unruly, while the slow track is bureaucratic and secretive – at most journals, the experts’ comments never become available to readers, and the editor’s resulting evaluation of the manuscript’s strengths and weaknesses is never communicated to readers either.

On the fast track, comments on social media and preprint servers are available immediately. Some are made by experts from the same field as the manuscript authors, others by experts who bring valuable perspectives from other fields.

What these online discussions don’t have is curation – no sorting of the wheat from the chaff, and no clear indication that a relatively disinterested expert has found certain comments reasonable or valuable. A second missing element is an incentive for quality comments, such as the possibility of being recognised for one’s contributions.

These two missing elements, curation and official recognition, are among the core functions of journals. A journal can be seen as essentially just a collection of respected academics carrying out a process that is (at least putatively) relatively unbiased, resulting in a certification (“publication”) that a contribution meets some standard of quality.

Will we need to reinvent the scientific journal wheel, or will legacy journals catch up with the modern world, by both taking advantage of and adding value to the peer review that is happening on the fast track?

One journal already is. Since its founding in 2018, the peer-review process of Meta-Psychology, a free open-access journal, has revolved around preprints. Prospective authors first post a preprint to PsyArXiv and then submit it for consideration at Meta-Psychology by providing the link to the journal. The editors decide whether the manuscript merits further consideration, and if so, the authors are asked to add a notice to the first page of their preprint: “Submitted to Meta-Psychology. Click here to follow the fully transparent editorial process of this submission. Participate in open peer review by commenting through Hypothes.is directly on this preprint.” The editors then announce an open call on the journal’s Twitter account, inviting researchers to comment on the manuscript using the free web annotation tool Hypothes.is. Hypothes.is comments appear online as soon as they are made, so anyone can read them.

In addition, as at traditional journals, the action editor sends requests to specific individuals they believe could provide good reviews. But unlike most journals, Meta-Psychology instructs these hand-picked individuals to use Hypothes.is, in the same way as those responding to the open call.
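
A side note for the technically inclined: because Hypothes.is annotations are public, the unfolding review of a submission can be read not only on the preprint page itself but also programmatically, via the public Hypothes.is search API. Below is a minimal sketch in Python; the preprint URL is a placeholder, not a real submission.

    # Minimal sketch: list the public Hypothes.is annotations on a preprint.
    # Uses the public Hypothes.is search API; no authentication is needed
    # to read public annotations. The preprint URL below is a placeholder.
    import requests

    PREPRINT_URL = "https://psyarxiv.com/abcde"  # hypothetical preprint

    response = requests.get(
        "https://api.hypothes.is/api/search",
        params={"uri": PREPRINT_URL, "limit": 50},
    )
    response.raise_for_status()

    for row in response.json()["rows"]:
        print(f'{row["user"]} ({row["created"]}):')
        print(row["text"])
        print("-" * 40)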

When I submitted a manuscript to Meta-Psychology last year, I wasn’t optimistic that anyone would respond to the open call for reviewers; the manuscript was a commentary on a rather specialized topic. But two experts did respond, taking the time to log into Hypothes.is and provide constructive feedback and criticism. It was nice to see the process unfolding, rather than wondering for months what was going on until something popped out of the black box of a journal management system.

Once the comments of the two entirely volunteer experts were combined with those of the three reviewers hand-picked by the editor, I’d say that my manuscript had been fairly extensively reviewed. After I responded to all the comments, using many of the points to improve my manuscript, the editor accepted it. The final version was put into the journal’s format and posted on its website.

Since late last year, the open-access publisher PLOS has also been experimenting with open, crowd-sourced peer review. As far back as 2018, authors could opt in to having PLOS post a version of their submitted manuscript to bioRxiv. Now, at four PLOS journals, editors are encouraged to consider any comments made on bioRxiv when writing their decision letters. Comments taken up in the decision letter become part of the published peer-review history when the article is published.

History provides some grounds for skepticism that large-scale commenting outside of journal management systems will persist. PLOS began allowing readers to post comments on articles more than fourteen years ago, and some of us were very excited about it. But as time passed, fewer than 10% of PLOS articles attracted comments, which was disappointing to some. PubMed added a commenting facility in 2014 (eleven years after a colleague and I made a case for one), but shuttered it in 2018, saying “It just wasn’t turning into a major point of discussion for the research community”. One reason for the historically low rate of commenting may be that few articles had any relevance to an urgent crisis.

In a crisis, one advantage of open fora is that researchers can, on a dime, direct their attention to whatever is most important at the time. No editor needs to take the time to find new associate editors who have expertise in the newly important area, have spare time on their hands, and know who the good reviewers in the field are. A shift in how research time is allocated can happen at the grassroots, unhindered by centralized processes and their associated delays.

Sometime after the rapid response at the grassroots, this new activity can be enhanced with some organization and systemisation. For COVID-19, a small team deployed Outbreak Science Rapid PREreview, which helps direct volunteers to comment on pandemic-related preprints for which the authors or other users have requested reviews. A short review form elicits structured input on the quality and importance of the research reported in the preprint. Researchers can register with their ORCiD so that others can see each commenter’s expertise.

Thanks to the crisis, as well as factors such as the growing ubiquity of social media, the social norm in science against commenting openly is changing. But an additional important factor hindering open evaluation has been the inability of researchers to get career credit for even the most extensive and constructive comments. This too is essentially a social norm, but one held in place partly by a technological gap: comments did not become part of a durable record that scholars could be confident would remain available in perpetuity. Several years ago, I had an illuminating exchange with some other researchers thanks to the commenting facility put in place at Trends in Cognitive Sciences by its publisher (Elsevier). About five years later, the publisher lost the entire exchange when it updated its system.

As part of its pilot to involve bioRxiv comments in the review process, PLOS publishes its decision letter as part of the article, with a subsidiary DOI, preserving in a permanent record the comments mentioned in the letter. Similarly, Meta-Psychology names everyone who made substantial comments in the published article itself and creates an easily citable DOI for the review history as a document, with the reviewers listed as its authors.

These efforts to make records of peer review open and long-lasting are laudable because the comments made during peer review are sometimes substantial contributions to scholarship that advance our science. They also surface experts’ differing views of a finding, which can be invaluable to researchers following the area, as well as to journalists and policymakers.

No one wants to rely solely on what researchers write about their own work, even if the researchers have been forced to moderate some of their claims by a hidden process of peer review. Journalists, policymakers, and other readers want to get a line on what unaffiliated experts think, especially those who have delved into the guts of a paper or the data behind it. While such commentary and criticism sometimes does eventually appear in the traditional published record once others in the field have had time to write their own papers, it’s often quite hard to find and interpret, embedded as it tends to be in a narrative designed to advance the new findings of the commenting researchers. And in a crisis, it comes too late.

Eventually, I believe, it will be standard practice to cite peer reviews and to accept influential reviews as legitimate entries on the CVs of researchers being considered for a position, a grant, or promotion. But how long will this take? Those of us who are in a position to hasten that day, for instance as members of journal editorial boards or through our affiliations with scientific societies, have a responsibility to do so. Research evaluation, and the certification that a certain quality of evaluation has happened, is too important to leave to a delay-prone system that relies on waiting for the slowest of half a handful of expert reviewers.



1 Comment

  1. This is a well-written, thoughtful, and thought-provoking piece. I hope that meta-scientists are working on assessments of open reviewing. I appreciate the arguments in favour of it, which have considerable force. But I am concerned about potential ill effects of making reviewing a public performance. In the current closed-review journal system, I think that most reviewers write reviews with the aim of advising the editor and the authors. Of course, some people provide unhelpful reviews, but in my experience the vast majority (of hand-picked reviewers) are thorough and constructive. Even if posted anonymously, open reviews are written for a wide audience and thus may influence reviewers toward playing to the crowd.

    Of course, even if that does tend to happen, it may be that the advantages of open review outweigh the costs. Similarly, I sign my reviews, and that practice may influence me toward managing the impression I make on the authors, which may at times work against doing an optimal job of advising the editor. But I think the advantages of signing (for me) outweigh that cost.

    Anyway, nice posting.

    Steve