Preventing peer review fraud: F1000Research, the F1000 Faculty and the crowd
9 April 2015 | Rebecca Lawrence
Back in November 2014, BioMed Central discovered about 50 manuscripts in their editorial system that involved fake peer reviewers. Following a detailed investigation, they have now begun retracting 43 papers that were published on the basis of reviews from made-up reviewers. Some of these papers were among the original 50, but many more were unearthed by the investigation that discovery prompted. We believe we are in a unique position to avoid these issues on F1000Research, as explained below.
A growing problem
This is certainly not the first time we have seen bulk retractions due to fake reviewing, and it appears to be a growing problem: Retraction Watch estimates about 170 retractions to date for this reason alone. In 2012, the Elsevier journal Optics & Laser Technology retracted 11 papers after an unknown party gained access to the relevant editor's account and assigned the papers to fake reviewer accounts. In July 2014, SAGE had to retract 60 articles from the Journal of Vibration and Control because of a peer review ring that had assumed and fabricated identities and used them to manipulate the journal's online submission system. In December 2014, Elsevier retracted another 16 papers across three journals after an author orchestrated fake peer reviews by submitting false contact information for their suggested reviewers.
Many of these cases have only been uncovered by chance: referee reports coming back unusually quickly; reports that are uniformly and overly positive; odd patterns in reviewer email addresses; or reviewers refereeing articles across completely different topic areas. Investigations in a number of these cases suggest that some manipulations may have been conducted by third-party agencies offering language-editing and submission assistance to authors, and it is unclear whether the authors themselves were aware of the fabrication.
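As an illustration only (this is not a description of any publisher's actual screening system), signals like those above could be checked programmatically along the following lines. The field names and thresholds here are assumptions for the sake of the sketch:

```python
# Illustrative sketch only: hypothetical red-flag checks for a returned
# review, based on the signals described above. Thresholds are arbitrary.
from dataclasses import dataclass, field

FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com"}

@dataclass
class Review:
    reviewer_email: str
    turnaround_days: float
    recommendation: str                      # e.g. "accept", "major revision"
    reviewer_topics: set = field(default_factory=set)
    article_topics: set = field(default_factory=set)

def red_flags(review: Review) -> list:
    """Return human-readable warnings for an editor to follow up manually."""
    flags = []
    if review.turnaround_days < 1:           # report came back unusually fast
        flags.append("unusually quick report")
    if review.recommendation == "accept":    # uniformly positive reviews
        flags.append("overly positive report")
    domain = review.reviewer_email.rsplit("@", 1)[-1].lower()
    if domain in FREE_EMAIL_DOMAINS:         # odd reviewer email pattern
        flags.append("free, non-institutional email address")
    if review.reviewer_topics and not (review.reviewer_topics
                                       & review.article_topics):
        flags.append("reviewer's field does not match the article")
    return flags
```

None of these signals is proof of fraud on its own, of course; in practice they would simply prompt a closer manual check.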
Possible solutions
The reasons for this growing problem are probably fairly obvious to those working in science, and I will leave that debate to another post. The question here is how to tackle this specific type of misconduct. There are some simple solutions, such as tighter password security for online submission systems (as now implemented in Elsevier's EES system). ORCID should also assist in verifying a referee's identity, although its system is not yet hack-proof either.
One of the main routes for these fake reviews has been the fabrication of email accounts for author-suggested reviewers. Supplying addresses from free email providers such as Gmail, Yahoo, or Hotmail makes this especially easy, as does using a name and email address that closely resemble those of a genuine researcher, which may go unnoticed if not checked carefully. In response to the BioMed Central case, both BioMed Central and PLOS ONE have turned off the ability for authors to directly enter the names of potential reviewers into their online submission systems.
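To illustrate the look-alike problem, here is a minimal sketch (hypothetical helper names and an arbitrary threshold, not any publisher's real tooling) of how a suggested reviewer might be compared against researchers already on record:

```python
# Hypothetical helper: flag a suggested reviewer whose name or email is
# suspiciously close to, but not identical to, a researcher on record.
from difflib import SequenceMatcher

def looks_like(a: str, b: str, threshold: float = 0.9) -> bool:
    """True if two strings are very similar without being equal."""
    a, b = a.lower().strip(), b.lower().strip()
    return a != b and SequenceMatcher(None, a, b).ratio() >= threshold

def suspicious_suggestion(name, email, known_researchers):
    """known_researchers: iterable of (name, email) pairs on record."""
    for known_name, known_email in known_researchers:
        if looks_like(name, known_name) or looks_like(email, known_email):
            return True       # near-match to a real identity: check manually
    return False
```

A near-match such as "Jane A. Smlth" against a record for "Jane A. Smith" would trip the check; the point is not automatic rejection but a prompt for manual verification.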
F1000Research’s approach
At F1000Research, our author-led approach is at the heart of our publishing process, and this includes authors suggesting their own referees, following strict guidelines and criteria. On the face of it, this might seem to leave us even more exposed to peer review fraud than other publishers, but we believe our systems and processes make our model more robust. Our authors are encouraged to identify referees from our very large list of pre-approved experts. This list includes all members of the prestigious F1000 Faculty (including 10 Nobel Laureates, 16 Lasker Award winners, over 150 members of the National Academy of Sciences, and many others who have received prestigious awards for their outstanding contributions to science), as well as others who have refereed for us previously and have therefore already been carefully checked. We thus have existing contact information for these individuals, which we use instead of any details provided by the author.
If authors cannot find enough relevant experts on this list for the topic of their paper, they are free to suggest other referees. However, we check every suggestion to ensure the person exists and can be verified as working at a reputable organisation; we then independently source their email address, and we only accept non-institutional email addresses that have a clear record of publications associated with them. Any referee suggestion that cannot be verified is rejected.
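Sketched in code, that workflow looks roughly like this. Every helper function here is a stub standing in for a manual or service-backed check, not a real API:

```python
# Rough, self-contained sketch of the verification workflow described
# above. The helpers are stubs for manual checks, not real services.

FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com"}

def person_exists_at(name: str, affiliation: str) -> bool:
    # Stub: in practice, a manual check of the organisation's staff pages.
    return bool(name) and bool(affiliation)

def independently_source_email(name: str, affiliation: str):
    # Stub: in practice, sourced from institutional pages or prior records.
    return None

def is_non_institutional(email: str) -> bool:
    return email.rsplit("@", 1)[-1].lower() in FREE_EMAIL_DOMAINS

def has_publication_record(email: str) -> bool:
    # Stub: in practice, a search for publications listing this address.
    return False

def verify_suggested_referee(name: str, email: str, affiliation: str) -> bool:
    """Accept a suggested referee only if every check passes."""
    if not person_exists_at(name, affiliation):
        return False
    sourced = independently_source_email(name, affiliation)
    if sourced is not None:
        email = sourced                      # prefer our own sourced address
    elif is_non_institutional(email) and not has_publication_record(email):
        return False                         # free address, no paper trail
    return True
```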
While it can be a time-consuming job, especially for referees from institutions without well-structured websites or where researcher disambiguation is challenging, we feel it is an important role for publishers to play in ensuring the quality and trustworthiness of their refereeing system. For us it is especially important, because our peer review reports form an integral part of the F1000Research publication: they are indexed with the paper and are independently citable.
Furthermore, because referees are named and their reports are published alongside the article in the F1000Research system, instead of a single pair of eyes (the Editor's) looking out for hints of something awry, the whole research community can keep watch. The benefit of the crowd has been demonstrated many times in identifying fraudulent research that passed peer review and was published in other journals, so even if something were to get through our checks, we are confident the crowd would quickly spot anything untoward in the public referee reports.
Ultimately, we feel strongly that the full transparency offered by open peer review aids the self-correcting mechanisms that are meant to underlie all scientific progress.