Peer pressure
9 August, 2010 | Richard P. Grant
I guess we’ve all had our fair share of unreasonable reviewer comments. But does that mean that peer review is broken? Peer review is inextricably linked with citation metrics and the impact factor, but for now let’s set those aside and focus on peer review itself.
Is peer review broken? Is the fact that we’ve noticed problems with it actually a sign that it is working, and that it can (and should) be fixed, and if so, how? Or should it be thrown out entirely and replaced with something else?
How about an analogy? Peer review is the car you learned to drive in. It still runs, but has a bit of rust and a few squeaks, and you could probably get more mileage out of her. Do you drive on, because it still works? Perhaps throw some Redex in the tank, change the plugs and oil filters? Would you fit a new exhaust, or perhaps go as far as replacing the entire engine? Or maybe it’s time to trade her in for something new, but untested?
Tell us what you think, and vote in the new poll (across there on the right).
Has anyone come up with more than anecdote to say peer review is broken? It’s not perfect, but what is?
There were some (pretty poor, I think) studies in medical journals that seemed to suggest that PR was no better than chance; but I seem to remember being unconvinced by what they were measuring.
You’ve got to look at what PR claims to do, and what it doesn’t, and I think a lot of the critics don’t do that.
Oh dear. Did I scare everyone else away?
Looks like it, Bob. Say something controversial.
Errm.
Anyone who wants to chuck away peer review is an idiot and deserves to be served up in the Nature canteen’s Baked Mediterranean Ratatouille.
That might improve the Ratatouille.
The problem though isn’t “Is Peer Review Broken” but rather “What can we reasonably expect Peer Review to do?”. If you want an infallible filter that prevents rubbish from being published and gives an accurate assessment of the future importance of papers at the time of their publication, then you are out of luck.
Bingo. For me, anyway–when peer review is used as it says on the tin, it works well (if frustratingly!). It’s when it’s made out to be something it isn’t that we run into trouble.
That’s not to say it can’t be improved, of course.
I think, for the most part, it works. The problem is trusting that a potential competitor who reviews your work is actually fair. Humans are humans, after all, and the anonymous nature of the process allows dishonest or self-serving reviews to happen. Too many editors don’t catch the biased reviewer.
I think you’ve hit a spot there; one of the weakest bits in the peer-review process currently seems to be the editor. In the end it is her/his decision that counts but often it seems they go by a majority vote amongst the reviewers.
Amazing that these posts are at such a juvenile level–I like it vs I don’t like it. Scientists are supposed to give the grounds for their conclusions. I think it needs fixing for the following reasons:
1-The data underlying the summary statistics are not available to the reviewers for alternative analysis, which is web-feasible. The journals don’t have adequate statisticians on board.
2-The reviewers are mostly not up to doing alternative analyses or even understanding the applicability of fancy techniques. They often are not expert in the specific field.
3-Peer review is a freebie; some reinforcement heightening is needed.
4-There is no system of peer-reviewed post-publication review and follow-up linked publication.
5-There is no ongoing inter-rater reliability study.
What the heck is “reinforcement heightening?” Is that like paying them? As you say “Scientists are supposed to give the grounds for their conclusions.” What are your grounds for your conclusions?
For example,
2) What is your hard data that “reviewers are mostly not up to doing alternative analyses or even understanding the applicability of fancy techniques. They often are not expert in the specific field”?
4) This exists. It is called referencing and hyperlinking. If an article is referenced it is likely to be important. Most online journals now provide hyperlinking to articles in both directions.
5) How would an ongoing inter-rater reliability study improve peer review? Again, these are words without substance.
Your post certainly didn’t improve the level of discourse. It reminds me of lectures by science Faculty that use lots of big words but basically say nothing.
Unfortunately and fortunately, it is necessary. I think the fundamental “problem” is its attachment to the reward system in the cosmos: promotion, etc. It is that link that should be redesigned; some form of assessment and evaluation may be inevitable.
I voted for “it needs tweaking,” but I agree with ‘bill’ that the fundamental problem is peer review’s role in the reward system.
I believe (without actual proof!) that peer review works well much of the time at weeding out very bad articles and improving flawed articles. It takes responsible reviewers and editors to work, though. I hate to increase the burden on editors (who have a hard enough time finding reviewers), but I think some reviews are so clearly mean-spirited or off-track that they should just be tossed in the trash, not sent to the author or used to decide publication/no publication. Perhaps sometimes reviews should be paraphrased, which would be much more work for editors. But most importantly, we shouldn’t expect peer review to carry a heavier burden than it can bear. Publication should not be the most important metric of success – or perhaps it should be the most important, but something has got to be a close second, and something else a close third, which doesn’t seem to be the case right now.
If I had solutions, I would share them, but the best advice I have to offer is that there should be widespread discussion of the reward system, the proper role of peer review and publication in it, and how we might come to a more rational and effective system.
Ken
Current peer review is based on anonymity. Why? The only reason that is truly applicable is that people on the receiving end of a bad or unfair review might try to retaliate if they knew the identity of the reviewer. But this basic assumption of anonymous peer review means that scientists cannot be trusted, i.e., they are subject to baser human qualities (e.g., revenge). If this is so, then the people doing the reviews must also be capable of this same behavior and can preemptively hurt a rival or disadvantage the competition. Thus, anonymous peer review is based on a fatal flaw of logic. It can only be straightened out by: (1) publishing all papers without review (it may not be as troubling as one might imagine – truth will stand the test of time) or (2) abolishing anonymous peer review and having the reviewers’ names out in the open. Much greater care would be taken in the review process if the latter solution was applied. This is an even more important issue for the anonymous peer review of grants where actual money is riding on the outcome and competition for scarce resources is fierce.
I would say that the review of grants is not truly anonymous, because you can see the roster and in most cases figure out who reviewed it.
The process for grants is a bit different from that for papers, especially if you meet with a group: the review (if the grant is discussed) is subject to what 10-15 people in the room think, i.e. whether it is fair/reasonable. With papers there usually aren’t people in the room to whom you have to defend your viewpoint.
I do agree about the flaw in logic.
Fair enough. However, the ‘safety in numbers’ argument is a slippery slope. Most were once in favor of the Iraq war and are against gay rights (marriage, etc.). It is interesting that you (we all) try to “figure out who actually reviewed it (the grant).” This speaks to my first point: the retaliation motive. Or alternatively, it means that we may try to change the grant to please the reviewer, which gives them an unfair advantage (an a priori infallibility, if you will) in the process. This is again a fatal flaw of the current anonymous peer review system, no matter how it’s sliced.
Also, I prefer a solution to funding somewhat along the lines suggested by Marc below, but I agree with Ken that the metrics need to be better defined.
There’s at least one reason, other than fear of revenge, that anonymous peer review can be useful.
I generally prefer not to review anonymously because I like to take responsibility for my work. Due to the peculiarity of my employment situation, I am also not worried about anyone trying to get revenge on me for a bad review I wrote. However, sometimes I accept or seek the cover of anonymity because it takes so much more effort to point out flaws in a friendly way than simply being blunt. Some articles have major flaws that need to be articulated; doing so without running a risk of hurting someone’s feelings is pretty difficult. It might be cowardly of me, but I feel better about submitting a harsh (but not mean-spirited) review anonymously than submitting a weak, perhaps inadequate review with my name attached.
Ken
I support the proposal that peer review of papers and grant applications should not be anonymous. If an anonymous reviewer’s comments indicate that he or she has not read or misread a grant application, there are no consequences to that reviewer. Thus, reviewers can say whatever they please and they will get away with it.
If a paper or proposal is being evaluated on scientific grounds, the reviewer should have no hesitation to publish her or his name. This will not only add civility to the comments, it will also put the scientific credibility of the reviewer on the line.
And there’s also (3), which is a variant of (2), where the author is required to find two or three peers with a public track record of academic achievement (one of them a statistician, perhaps), who are willing to be named as ‘endorsing’ the paper in question. Endorsed papers would be published immediately. Very few, if any, such peers would endorse a paper that they saw as too flawed for publication, and the end result would pretty much be what is now achieved by traditional peer review, if not better and faster. It would likely reduce the often-seen multiplication of effort now inherent in the ‘cascading down’ effect of papers being rejected, resubmitted elsewhere, rejected, resubmitted elsewhere, and finally accepted elsewhere again, all the while levying a burden on the pool of reviewers and unnecessarily costing time.
@Jan Velterop
Instant peer approval by recommendation is a very good idea. If publication speeds could be increased, science would benefit, as the flow of scientific information would progress smoothly and swiftly. It typically takes 6-12 months to get into a good journal; a paper may be rejected outright after a wait of maybe three months, but as you cannot send to more than one journal at a time, you then have to submit to a second journal and spend another three months or so waiting, progressing down your descending order of preference of journals for your article.
Well, the studies that say peer review does not work much better than chance were peer-reviewed…
I think peer review of manuscripts is very seriously flawed and peer review of grant applications is terminally flawed, especially in a 10 percentile funding environment.
When I review a paper I like to think I am fair because I know the field, but I recognize that is a very subjective feeling and that by definition I am unaware of my unconscious biases. I have lost count of the atrocious reviews of papers authored by myself or by colleagues. When reviews are that blatantly beside the point, the reviewer is disqualifying him/herself and the editor should be able to step in. However, too often editors don’t function as editors and simply tally thumbs up/down.
When I review a grant application, I am extremely uncomfortable because I do not have a crystal ball and there is no way of predicting how the science is going to turn out. And nobody has a crystal ball…, so grantsmanship gets assessed while creativity and new paradigms get shafted. Oh well…
The fact is that despite peer review, papers get through that are a) not even technically sound, b) incoherent, c) irreproducible and d) outright fraudulent.
The fact is that peer review rejects many papers that are great advances. There is a Spanish web site (sorry, don’t have the URL handy) that tallies the papers leading to Nobel Prizes that were initially rejected, and it is a substantial fraction of them, close to half if memory serves.
In physics, at least there is the preprint system, which prevents unscrupulous reviewers from sitting on a paper before rejecting it, while quickly repeating the experiments to scoop it.
Now at least we have PLoS One that accepts any paper that is technically sound, which is a great improvement.
In the end, it does not matter where a paper is published: when evaluating its impact on your own research, you still need to assess it critically, so you are in fact re-reviewing it.
I like EMBO Journal’s new approach. I think reviewers should no longer be shielded by anonymity; people write differently when their name is attached, and it would prevent a lot of minor and major abuses of the system.
For funding, it should be based on past productivity, reviewed every 5 years or so on the basis of the last 10 years. Productivity would be tallied not by number of publications times impact factor, but by the citations of the 5 most significant papers. Higher productivity, higher funding; sustained productivity, same level of funding; lower productivity, lesser funding (not no funding).
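To make that rule concrete, here is a minimal sketch in Python; the function name, thresholds and multipliers are purely illustrative assumptions on my part, not part of any actual funding formula:

    def adjust_funding(current_funding, citations_top5_now, citations_top5_last):
        # Hypothetical rule: compare citations of the 5 most significant
        # papers in this review window against the previous window.
        ratio = citations_top5_now / max(citations_top5_last, 1)
        if ratio > 1.2:        # higher productivity: higher funding
            return current_funding * 1.25
        elif ratio >= 0.8:     # sustained productivity: same level of funding
            return current_funding
        else:                  # lower productivity: lesser funding, not none
            return current_funding * 0.75

    # Example: adjust_funding(100000, 600, 400) returns 125000.0

The point of the sketch is only that such a rule is mechanical and leaves nobody unfunded; the actual thresholds would have to be set by the funder.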
My 2 cents.
This is a constructive contribution; thank you.
I very much fear that you might be correct that “peer review of grant applications is terminally flawed, especially in a 10 percentile funding environment.” I don’t care for your solution, though, because the number of citations results from the number of peer-reviewed publications. Doesn’t that mean that the flawed peer (publication) review system is thus set up as the final arbiter of grants as well, with no independent judgment applied to the particular project? Or am I missing something?
Ken
You are not missing anything! I was not complete in my reply, I was not expecting tough reviewers like you…, I was just writing down what came to mind without turning it into a treatise on peer review… 😉
I wanted to emphasize the absurdity of using the impact factor (4/5 of it is based on 1/5 of the papers, so it says little about any particular paper) when you can use the more direct measure of the citations attributed to the paper under consideration.
In my ideal world the peer review system for papers would have been fixed so that the citation count would be more meaningful… 😉
Citations are flawed too, beyond the issue of peer review, because they are context-independent: they do not distinguish between papers that are cited because they make a contribution and those that are cited because they were proven wrong…
At least with publications, you have the recourse of immediately resubmitting to another journal, whereas with grants you have to wait for the next cycle, and your career may be “terminated” before you get there…
So what I am trying to say is that if you want to use a metric, citation is more meaningful than impact factor, but it remains flawed. What is needed to evaluate a colleague’s body of work is a knowledge of his/her field so that his/her contribution can be discerned. It takes more time than counting citations and there are not necessarily people on the panel who can/will do a fair assessment.
Now, when you take only the 5 most significant papers from each candidate into consideration, I think the ranking of the candidates would be more meaningful than any type of metric. The advantage of such a system is that once these are the rules of the game, salami publications and citation gaming are no longer incentivized, and people will be motivated to come up with fewer but more meaningful publications, for the benefit of all, editors and reviewers included.
Peer review is not broken, and certainly you can go to any journal and find published papers with which you may take issue. I run a small journal; it takes time to ensure the process is fair, but that is my job and a responsibility I have accepted. My institution and chair have accepted that it will take time to do this unpaid job. (To be fair, I get a “free” subscription, just like everyone else on the editorial board.)
Peer review is essential and has well over 300 years of use in Western science. Blinded peer review is essential to avoid numerous pitfalls. I will just post my in-depth comments to the article itself and forgo making them here.
However, peer-review of manuscripts is a bit different from peer-review of grant applications and the two processes should not be discussed at the same time. Both are essential, but they are two different beasts.
The problem is not with the peer review system but with the people who implement it. Some scientists are more ideologically motivated than scientifically objective and thus will block papers that would contradict their cherished beliefs. Then we have the situation of awarding the PhD to those who are actually well-trained, highly skilled technicians, not scholars. Substandard research can usually be shopped around until a journal is found that will publish the results.
It has been my experience that really smart people are not much different from everyone else; in some cases they are worse when it comes to ego, pettiness, lack of social skills, etc. These personality defects make for some very selfish, unenlightened decisions. Most academics have a flawed view of themselves and their capabilities. Until the system collapses and newer people take over, there is nothing that can be done.
Experience changed my notion of peer review greatly when I became Director of Research in a Biotechnology company.
We needed to use published protocols for very practical objectives. We found many procedures could not be repeated reliably. Controls were frequently lacking, forcing us to carry out further research to discover missing components. Sometimes we found that cited methods had been modified without mention. In other instances we found uncontrolled parameters could substantially change the results, or reduce apparently certain results to ‘statistical variables’. Peer Review became a joke around the laboratory.
After a while, I concluded that most reviewers must have ignored Materials & Methods; and, many authors should have reduced Conclusions to a single statement; or, perhaps, omitted any conclusions entirely.
Now retired and carrying out field research without perishing [yet], I am disturbed that Nature has reduced Materials & Methods to a ‘Methods Summary’ [details found somewhere online].
The end-point of this thinking may be to reduce Results to a ‘Results Synopsis’ [numbers and graphs found somewhere online].
I do so miss the long and chatty papers of Hans Krebs; Melvin Calvin; and, similar papers of others like them. I am reminded that we read and re-read classic papers because of the Methods and Results, not the Conclusions.
Perhaps, today, we should write papers with the following disclaimers: [1] Methods are Proprietary and subject to the following Patent applications; [2] Results are Confidential and subject to Copyright and FDA Approval; and, [3] These are our Novel Conclusions, which may also be found in our next grant application.
I recommend that peer review focus entirely and only on the Materials & Methods, with a strong nod toward Results consistent with the Methods; and entirely ignore conclusions, scope, or whatever else. A paper stands on its Methods and its Results; the rest is ‘commentary’.
Wayne – that is brilliant, thanks for taking the time. I also cannot stand the “go and look somewhere else for the methods and data” approach that is so popular these days. I don’t care about the “discussion of plausible reasons why our results might make biological sense” (have I written these before? Oh yes, guilty as charged…). I’d rather see the data upfront, and how it was generated – in detail.
Just letting you know you’re not a lone voice howling in the wilderness. There are two of us, at least.
Make that three.
Do not base the publishing decision on a single peer’s comments. You could publish if at least one peer out of five gives the go-ahead, and then, after the citations, briefly give what the peers had to say about the paper. You could even categorize: green dot, yellow dot and red dot, and invite comments from readers. Publish gists of those comments in the next issue.
That’s *almost* what F1000 does–but our Faculty (not just any reader) publish comments after peer review and publication.
Speaking as an editor, I will say that everywhere I have worked, the staff and volunteer editors take reviews very seriously. At my current journal, for example, we have weekly meetings with the academic Editors in Chief (unpaid), and we discuss reviewer assignment, problem reviews, etc. at every one. When the academic editor is not satisfied with the reviews, s/he invites another reviewer. Peer review is difficult and time-consuming and imperfect; however, I see it as working pretty well. (Of course, my experience is anecdotal and thus not scientifically sound.)
I’m curious about the alternatives, though. What would the aim of ending anonymity be? Would post-publication peer review work? Naming the reviewers would be easy enough to implement, and I can’t think of a downside from my perspective — but I also can’t think of what problem that would solve, except for perhaps innocent curiosity. The more radical change, post-pub review, would make the process transparent (a plus!), but would also allow the spread of nonsense, debunked later or not (a minus).
The entire argument just seems squishy. While I disagree with Donald Klein on the factual basis for his individual points, he’s right about this: “Scientists are supposed to give the grounds for their conclusions.” Even if we accept as given that peer review is broken, unless we can set up a way to measure “success,” we’ll never be able to fix or replace it.
I hope you don’t mind a non-academic jumping in on this discussion. I love this stuff!
It’s nice to have your input, Karen 🙂
Just thinking… how would it work if peer review were a “dialogue” between authors and reviewers, a friendly chat between colleagues trying to figure out what’s good and what’s bad in a manuscript, making the most of it? Just dreaming?
At ILSI Europe, we mention the names of reviewers of our Report Series. Works well.