Enhancing Peer Review of Scientific Reports

Academic peer review of scientific manuscripts often falls short. It invariably slows and sometimes prevents the publication of good research. And it sometimes leads to the distribution and amplification of flawed research. Prestigious journals sometimes publish research grounded in shaky theory that used weak measures and inappropriate analyses to reach dubious conclusions. Failings of peer review play a principal role in those problems.

Journal editors typically perform editorial tasks off the side of their desks, on top of everything else. They may handle manuscripts outside of their expertise. Sometimes it’s difficult for them to know who to ask to review. When they do identify prospective reviewers, many decline the request or don’t reply. When I was Editor of Psych Science, I often sent 6 or more invitations to get 2 acceptances. Thus, it sometimes takes weeks to get a few people to commit to reviewing. Some of those fail to deliver on time or at all, despite multiple prompts (doubtless sometimes for good reasons – one never knows what’s going on in another person’s life).

So even under the best conditions, peer review takes a long time. And the decision is rarely “accept as is.” Good news for authors is an invitation to revise and resubmit. Thus, it typically takes many months between initial submission and eventual acceptance. The other day I saw an email about a manuscript submitted to Psych Science in January that had just been accepted. I thought, “Wow! Quick!”

Is that time well spent? I believe that generally it is. Many reviewers provide assessments that are detailed, clear, insightful, well-informed, and constructive. Many editors strive to understand the work well enough to fairly assess the manuscript and, if it has potential for their journal, to help make it as good as it can be. As an author/co-author I have many times been furious with editors and/or reviewers (indeed, even now it takes me days to steel myself to read an action letter), but very often editors’ and reviewers’ input has (I believe) led to major improvements.

Of course, some reviewers are unkind or (worse) unhelpful. Some say almost nothing. Some are bombastic, self-promoting blowhards. Because of the difficulty of securing reviews, editors typically rely on Ns of 2 or 3—a scant basis for generalization. Not infrequently, different reviewers say wildly disparate things. And even thoughtfully prepared, timely, constructive reviews sometimes get things wrong. Editors don’t always perform an independent evaluation and/or provide the degree or quality of oversight they should, so sometimes they get it wrong too. Peer review is far from perfect.

The flaws of peer-reviewed journals have led some to argue that scientists should just share reports of their work on open-access online platforms. The idea, as I understand it, is that fellow experts will offer critiques and suggestions that will in turn lead to revisions and/or follow-up work. The wheat will be separated from the chaff, and the best science will rise to the top. Tal Yarkoni wrote a characteristically engaging and persuasive paper advancing this idea in 2012.

[BTW, most journals are OK with authors posting “preprints” on centralized archives (see, e.g., Psychological Science). An Editor might decline a manuscript on the grounds that the work had already had most of the impact it was likely to have because a preprint had attracted many readers. But that would, in my opinion, be boneheaded.]

I believe that despite their flaws and limitations, journals in general and peer review in particular serve important functions (see my blog post from 2017). I personally would not post anything on PsyArXiv without first seeking critical input from several informed experts. I also wouldn’t submit a manuscript to a journal without first getting critical feedback from at least one wise person outside of my lab. If unrefereed PsyArXiv “preprints” became the coin of the realm, with scientists posting papers whenever the spirit moved them, I expect it would be a colossal mess in many scientific domains.

The drawbacks of rapid pre-review distribution are illustrated by a recent Twitter shitstorm about a medRxiv preprint claiming that SARS-CoV-2 RNA in sewage sludge is “a leading indicator of COVID-19 outbreak dynamics.” Naturally, this went viral. Uptake of the news that the claim was based on a statistical artifact has been more sluggish.

In a thought-provoking post here, Alex Holcombe advanced arguments for a hybrid model in which scientific journals draw on spontaneous peer reviews of preprints. Alex’s thesis was not that psychological scientists should forgo formal certification. Rather, he argued that journals, as curators purveying certification, should draw upon spontaneous feedback on preprints when it is available.

Alex described the approach taken by the open-access journal Meta-Psychology. Authors first post a preprint, then submit it to the Editor. If the Editor thinks the manuscript is worthy of review, they tweet an invitation for feedback. They also solicit two or three expert reviews as per usual. Alex reported that, for a manuscript he submitted to Meta-Psychology, two people volunteered constructive reviews, which (along with two solicited reviews) led the Editor to invite a revision that was subsequently accepted. I note that the article has drawn nearly 300 PDF views (Holcombe, 2019).

This is an interesting approach with lots of potential. I would like to learn more about it. What percentage of submissions draw spontaneous peer reviews? What determines whether a manuscript does or does not attract volunteer reviews? Are editors more likely to accept papers that draw volunteer reviews? Who proffers reviews, and why? How does the editor weigh these volunteer reviews?

My intuition is that the Yarkoni vision of self-correction emerging spontaneously from public posting of scientific reports might work well in small subfields with strong theoretical grounding and sophisticated methods, such as computational modeling or visual psychophysics. It is not a coincidence that these are the sub-domains in which the problems that led to concerns about a “replication crisis” are least severe. Meta-Psychology might well be another domain in which spontaneous, community-driven correction of preprints would thrive.

In addition to promoting the interesting model used by Meta-Psychology, Alex Holcombe’s post argued for open reviews with reviewers gaining professional credit for their reviews. First, I note that in the current system psychologists can get credit for writing reviews. It is appropriate to list journals and granting agencies for which you’ve done reviews on your cv and in your annual performance reports and tenure/promotion materials. Promotion committees, in my experience, pay attention to such contributions. Also, Editors notice who does good reviews and who doesn’t (rewarding the former with additional invitations to review; that may seem poor payment, but if you want to be appointed to editorial boards and perhaps later become an editor yourself, establishing a reputation as a strong reviewer is key). If you sign reviews, authors, too, will give you professional credit for providing useful constructive criticism (although it may take them a while to come to appreciate it).

All of that said, I agree that good reviewers are typically under-compensated for their contributions. Especially now that reviewers more and more often have an opportunity to assess a preregistration document, materials, analysis scripts, and data, providing a full-on review can be a major endeavour. Open signed reviews (with DOIs of their own), as advocated by Alex, would be a way to recognize and preserve those contributions.

But here again I have mixed feelings. It seems reasonable to hope that making reviews public would improve their quality, which would be a good thing. More reward for good reviews would also be a good thing. But (as I posted in response to Alex’s piece) I am concerned about potential unintended ill effects of making reviewing a public performance.  I am not opposed to open review, just ambivalent.

One potential cost of making reviews open is that it might make it even harder than it already is for editors to recruit reviewers. Maybe I’m wrong about that – maybe the rewards of reviewing openly would make it easier to recruit reviewers. But my hunch is that it is a bigger ask to invite an open review than a closed one. Especially if the review is to be signed.

A deeper concern has to do with the shift in audience for open as opposed to closed reviews. I believe that, in the current closed review system, most reviewers view themselves as providing a service to the editor, journal, and field. Most write reviews with the noble aim of advising the editor and the author as to the strengths and weaknesses of the manuscript. In my experience the vast majority of (admittedly hand-picked) reviewers are thorough and constructive. With open review, by contrast, even reviews posted anonymously are written for a wide audience. I worry that this may influence reviewers toward playing to the crowd, grandstanding, self-enhancement, etc.

Of course, even if that does tend to happen, it may be that the advantages of open review outweigh the costs. A related debate concerns anonymous versus signed reviews.  I sign my reviews, a practice that may influence me toward managing the impression I make on the author. That may sometimes work against doing an optimal job of advising the editor for the betterment of science. But I believe that the advantages of signing (for me) outweigh the costs. So too, the positive effects of making reviews open might outweigh any negative ones.

As Simine Vazire recently tweeted, “The side effects and consequences of changing peer review are probably super complex and hard to predict.” What an exciting time for meta-science! In addition to exploring ways to change peer review, researchers are working to improve it within the current system (e.g., Aczel et al.’s (2020) Transparency Checklist; Davis et al.’s (2018) Peer-Review Guidelines Promoting Replicability and Transparency). No system for curating scientific reports can work perfectly, but there is lots of room to improve the currently dominant approach, and I am optimistic that psychology will make important contributions toward doing so.

Author Note

Thanks to Laura Mickes for inviting this post, and to Alex Holcombe for feedback on a draft.

