UK enquiry into peer review Part I: effectiveness of standard peer review
1 June, 2011 | Rebecca Lawrence
The UK House of Commons Select Committee on Science and Technology launched an enquiry into peer review earlier this year, and invited Faculty of 1000 (F1000) to take part. The real rationale for holding this enquiry remains something of a mystery to many of us who were invited to present, but it appears to have stemmed from the original enquiry into Open Access about seven years ago. Ninety-five pieces of written evidence have been submitted to date, and a few of those submitters, including F1000, were then invited to provide oral evidence before the committee.
The standard opening question in these oral sessions seemed to be:
‘Peer review is perceived to be “critical to effective scholarly communication”. If it disappeared tomorrow, what would be the consequences?’
to which most answered that another version of peer review would simply be developed to replace it. Everyone agreed that peer review serves an important purpose and that, while it does indeed miss many things (it is not great at picking up fraudulent behaviour, for one), we would be worse off without it.
On the issue of dealing with fraudulent behaviour, Dr Elizabeth Wager, Chair of COPE (Committee on Publication Ethics) and Board Member of the UK Research Integrity Office, suggested:
‘You should say, “Don’t go to a university that hasn’t had at least one person fired for misconduct, because it means they are not looking for it properly”.’
This was later put by the committee to Prof Teresa Rees, former Pro Vice-Chancellor (Research) at Cardiff University, and Prof Ian Walmsley, Pro Vice-Chancellor at Oxford University, both of whom said they were not aware of anyone ever being fired at their universities for fraudulent behaviour. They clarified that they have their own processes that deal with such issues in other ways, and that these internal processes are reasonably robust, and more robust now than they have been. This of course led to the accusation from the Committee Chair, MP Andrew Miller:
‘Hang on a minute. You have not come across cases of fraud. How do you know that the processes in place to deal with them are robust?’
There was also agreement that the concept of cascading peer review is a good one. When practised within a publisher, between its journals (e.g. at the likes of Nature Publishing Group, Public Library of Science (PLoS) and BioMed Central (BMC)), it works well in saving the journal, authors and reviewers time. However, when practised between publishers (e.g. in the Neuroscience Peer Review Consortium), the problem of competition comes in, and disappointment was voiced about the uptake of the Consortium experiment. To highlight the point further, Nature Publishing Group Editor-in-Chief Dr Philip Campbell stated in his written evidence that Nature Neuroscience wouldn’t have participated in the Consortium if its main competitor Neuron (published by Cell Press at Elsevier) had joined, demonstrating the unfortunate realities of capitalism when it comes to trying to improve peer review.
The overwhelming feeling seemed to be that, although standard peer review practices were not ideal and did not pick up all the problems with articles, the publishing community, institutions, funders and other key stakeholders were working on different ways to try to improve them, and direct intervention from the UK Government was not required to help solve the problems. Tomorrow, I’ll be posting a second blog covering the discussion that took place in our oral session around the concept of splitting peer review into its two separate constituent parts, scientific analysis versus impact assessment, and touching on some of the related issues around data sharing.
[Please note that the quotes taken from the latest evidence transcripts are still uncorrected documents and the final form of their publication has not yet been approved by the Committee]
Read Part two.
When I read the title and tag-line of this article in the daily e-mail (“Most agree that peer review serves an important purpose, and while it does indeed miss many things, we would be worse off without it”), I was immediately reminded of the book review I read just yesterday in *Nature*. The book is *Adapt: Why Success Always Starts with Failure* by Tim Harford (Little, Brown/Farrar, Straus & Giroux: 2011) and the review is by Matt Ridley (see http://www.nature.com/nature/journal/v474/n7349/full/474032a.html).
Ridley quotes the book’s thesis as “trial and error is a tremendously powerful process for solving problems in a complex world, while expert leadership is not.” I’ve read elsewhere the claim that we never learn from success, only from failure.
All of this makes me wonder whether we should do more trial and risk more errors in peer review – or, actually, whether we should try some radically new approaches to accomplishing what we think peer review accomplishes.
If I had any ideas for such radical approaches, I’d share them. For now all I have is a feeling that it might be worth a try.
Ken Pimple
http://mypage.iu.edu/~pimple/
Hang on, what’s this about? There seems to be an assumption here that reviewers should adopt the philosophy followed by some news interviewers, asking themselves “why is this bastard lying to me?”. Well, (a) I don’t believe that all scientists are potential fraudsters and that it’s only eagle-eyed reviewers who keep them in check, and (b) I don’t think that’s the only role of reviewers. Good reviewers help investigators find ways to improve the presentation of their data, and to publish. Poor reviewers can disparage and destroy. By the way, reviewers are all investigators too. If you can get fraudulent scientists (very rare, in my view), you can also get fraudulent reviewers. How are we going to be protected from them?
Thanks for an interesting piece; I look forward to reading the follow-up. However, fraud detection (or rather, the lack thereof) is not the only problem with peer review. Concerns have also been raised that peer competition or personal views can lead to rivals’ papers being delayed or rejected by reviewers, as well as concerns that competitors who review a paper may use advance notice of the paper’s findings to their own benefit.
Dr Margaret Clotworthy
Human Focused Testing
Thanks for the comments. You are absolutely right: there are numerous problems with peer review, and lack of effectiveness at picking up fraud is merely one of them. I am sure I could write several blogs on the problems alone; however, as so much has already been written about them elsewhere, I wanted to focus on some of the interesting points made specifically during this enquiry.
Your point, Gavin, about authors and referees being the same people is a very important one, and something I think many forget; indeed, these are also the same people who serve as journal Editors, who in many cases not only referee papers but also make the final decisions on them.
I am certainly up for testing out radical new approaches to peer review; the difficulty always seems to come when people try to think of what those might be, but do let me know if you think of any!
Dear Sir,
The job of referees reviewing articles is more important than that of editors. I agree that good reviewers are those who make the article better for the profession, rather than those who are looking for a weakness in order to reject it.
Regarding fraud, it may start right at the beginning of a study, where junior researchers are motivated to manipulate data according to the wishes of the supervising investigators. Many Indian and Chinese research scholars say during friendly discussions that they have to protect their residency scholarships, and so must serve the supervising professor according to his wishes. In the 1980s and 1990s, it was proposed that, among people of South Asian origin living in the Western world, diet and physical inactivity had no association with the risk of CAD and type 2 diabetes. Review of the questionnaires used to assess these risk factors indicated that the assessment was very superficial, because the chief investigators wanted to demonstrate that lifestyle factors play no role and that the risk might be due to increased genetic susceptibility, without doing any testing for genes. Such instances are very common in the USA and the UK, which are considered to produce the best science in the world, as well as in other countries, including India and Western Europe. The point I wish to make is that we should continue to have peer review, which, in my opinion, should possibly be open to authors and reviewers.