Open to influence: are peer reviewers affected by the decisions of others?

Tiago Barros tells us about his investigation into how much reviewers agree with each other

Image credit: Mark Bonica, And we all fall down! 2, CC BY 3.0

For many, the lack of transparency in peer review is a problem, and there have been calls to move away from the traditional single-blind and double-blind systems. Indeed, the theme of this year’s Peer Review Week is Transparency in Review, with a focus on exploring what this means in practice.

At F1000Research we use an author-led, transparent peer review model and believe this benefits authors, reviewers, readers and the process of science. But could there be a downside? Could this transparency influence the integrity of review?

Investigating our peer review model

In our model, each peer review report is published online alongside the article almost immediately after it is received. This means that in some cases a reviewer can read the comments of an earlier reviewer while conducting their own review. My colleague Liz Allen and I wondered whether seeing a peer’s earlier review influences a subsequent review, and whether any such influence depends on the time between the peer review reports being published.

We wanted to ensure that we had a large enough dataset, so we looked at the first version of articles published on the F1000Research platform between July 2012 and February 2017. We excluded reviews, correspondence, editorials and commentaries, because these article types are reviewed differently from the more traditional articles that we publish. We also excluded articles where the gap between reviews was more than a year, leaving us with 1,133 articles and 2,266 reviewer reports to examine.
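For readers curious about the mechanics, the filtering step might look something like the sketch below, written in Python with pandas. It is purely illustrative: the file name and column names are assumptions for this example, not our actual data pipeline.

```python
import pandas as pd

# Illustrative sketch of the filtering described above. The file and column
# names (article_type, first_report_date, second_report_date) are hypothetical.
articles = pd.read_csv(
    "f1000_first_versions.csv",
    parse_dates=["first_report_date", "second_report_date"],
)

# Drop article types that are reviewed differently.
excluded_types = {"review", "correspondence", "editorial", "commentary"}
keep_type = ~articles["article_type"].str.lower().isin(excluded_types)

# Drop articles where the gap between the first two reports exceeds a year.
gap_days = (articles["second_report_date"] - articles["first_report_date"]).dt.days
keep_gap = gap_days <= 365

sample = articles[keep_type & keep_gap]
print(len(sample))  # the analysis described here arrived at 1,133 articles
```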

What the numbers told us

Our analysis showed that the median time gap between the first and second peer review reports being published was 18 days. We then looked at the recommendations of the reports. F1000Research peer review reports allow for three assessments: ‘approved’, ‘approved with reservations’ and ‘not approved’.

On the surface, these numbers looked remarkably similar across all reports. We found that 63.9% of first reviewers recommended ‘approved’ and 62.2% of second reviewers made the same recommendation. The same appeared to be true for ‘approved with reservations’, with 31.3% of first reviewers and 32.8% of second reviewers making this recommendation. This in turn meant that, overall, first and second reviewers recommended ‘not approved’ at almost identical rates: 4.8% and 4.9% respectively.

Taking a deeper look

As these numbers were for all reports in total and didn’t compare agreement between the peer review reports for individual articles, we needed to delve deeper. To do this we conducted a statistical analysis using Cohen’s Kappa (κ), which measures the agreement between raters – in this case reviewers.  The levels of agreement between reviewers were based on categories proposed by Landis and Koch: no agreement, slight agreement, fair agreement, moderate agreement, substantial agreement and almost perfect agreement.
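To make this concrete, here is a minimal sketch of how such an agreement check can be run, using scikit-learn’s cohen_kappa_score. The paired recommendations are invented for illustration, and the interpretation bands are the conventional Landis and Koch cut-offs rather than anything specific to our dataset.

```python
from sklearn.metrics import cohen_kappa_score

# Paired recommendations per article: first reviewer vs. second reviewer.
# These values are made up purely for illustration.
first_reviewer = ["approved", "approved with reservations", "approved", "not approved"]
second_reviewer = ["approved", "approved", "approved with reservations", "not approved"]

kappa = cohen_kappa_score(first_reviewer, second_reviewer)

def interpret_kappa(k):
    """Map a kappa value to the Landis and Koch agreement categories."""
    if k <= 0:
        return "no agreement"
    if k <= 0.20:
        return "slight agreement"
    if k <= 0.40:
        return "fair agreement"
    if k <= 0.60:
        return "moderate agreement"
    if k <= 0.80:
        return "substantial agreement"
    return "almost perfect agreement"

print(f"kappa = {kappa:.3f} ({interpret_kappa(kappa)})")
```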

Our analysis found fair agreement (κ between 0.302 and 0.352) between the first two reviewers of an article, regardless of the time lag between review reports. As a control, we looked at reports that had been published simultaneously and found the same level of fair agreement (κ = 0.282) between reviewers, suggesting that, at least to date and within our sample, reviewers are quite independently minded and not overly swayed by the opinions of others. I have just presented our findings at the Peer Review Congress in Chicago. You can see our poster with the full findings and details of our analysis here.

It will be interesting to see whether this finding holds across other F1000-powered platforms that use the same transparent peer review model, such as Wellcome Open Research and others that are soon to launch – and as researchers and publishers increasingly embrace more open models of peer review. With the growing number of platforms, Liz and I could have our work cut out for us!
