Politik

It makes much more sense in fact to publish everything and filter after the fact

so says our friend Cameron Neylon, a senior scientist at the Science & Technology Facilities Council.

I’m not going to get into that argument here, but I am going to archly raise my eyebrow at the piece in Nature whence that quote comes—Peer review: Trial by Twitter.

First, I take exception to the statement that papers are being analysed—’taken apart’ if you like—in the social media world “rather than at small conferences or in private conversation”. I’m pretty sure they’re also getting shredded in private and at conferences just as much as they always were. It’s simply that there are now additional mechanisms: blogs, Twitter and so on.

Maybe that’s picky, but stay tuned because it’s going to get worse.

The Nature reporter seems to think that the world of online commenting on published articles is ‘chaos’, and needs ‘a new set of cultural norms’ and ‘an online infrastructure to support them.’ To me, that sounds like she’s scared of social media and what it can do. While supporting the need for a clear record of scientific findings (and indeed, scientific process), I happen to think that this brave, new world we find ourselves in can only enrich the process. The authors of the two papers referenced in the article can be, I think, excused from grappling with criticisms in this arena simply because it is new, and they were (rightly!) scared. Hell, I’d be scared. But that doesn’t mean it’s wrong: simply that we haven’t got used to it yet (and David Goldstein seems to take a similar view).

Anyway, we can argue about such things till the cows come home. There are more serious errors in the piece—at least from our point of view.

It is good to see people like Liz Allen at the Wellcome saying nice things about us, and noting that our ranking is the only ‘alternative metric’ that the Wellcome uses ‘in any systematic way’. However, I do wonder who the critics are who ‘note that F1000 rankings tend to correlate closely with traditional citations’. I would like to see their data set, because while some papers correlate well with citations (and you’ll be excused a ‘well, duh’ moment there), there are quite surprising differences. (Yes, I know, I’ve been promising to write that paper for months now.)

Another ‘duh’ moment comes here: ‘most papers never attract the attention of the faculty members, so that they are never ranked at all.’ Well, quite: one of our selling points is that, depending on the field, only 1-2% of all biology/medicine articles ever make it into F1000. And yes, the longevity paper only had one evaluation—maybe that was because, on reflection, it might not have been all that good?

You’ll notice the writer doesn’t mention the arsenic paper. Probably because that has three dissents and therefore doesn’t jibe well with the critique.

Finally, there’s a ‘very brave’ definition of the word ‘common’:

For comparison, the currently highest-ranked paper on the site has an aggregate score of 62, and scores of 20 or more are common.

There are over 100,000 evaluations on the site, covering a little short of 80,000 papers. I just checked, and 174 have a Factor of 21 or more (you can verify that number easily enough). About 230 have a Factor of 20 or more. That’s less than one third of a percent. Ho hum.
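
To spell out that arithmetic, here is a quick back-of-the-envelope check, taking the round figure of 80,000 papers as the denominator (the true count is a little lower, which nudges the fractions up slightly, but nowhere near a third of a percent):

$$\frac{230}{80\,000} \approx 0.29\%, \qquad \frac{174}{80\,000} \approx 0.22\%$$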

And finally finally, being lumped together with thirdreviewer.com? A site that hasn’t had a post since October and where half of the latest 50+ comments are anonymous?

Please. That hurt.


12 thoughts on “Politik”

  1. Cami Ryan says:

    I have to agree with Grant’s statement here: “…I happen to think that this brave, new world [social media] we find ourselves in can only enrich the [peer review] process.”

You only have to look at one of the many articles written recently on the perils and inefficiencies of the peer review process – and to have been peer reviewed, and to have conducted a peer review yourself – to know that the system can be riddled with inefficiencies and bias (see ref below). Unfortunately, journals and institutions are highly path dependent in terms of the peer review process and are resistant to change. This ‘brave, new world’ of social media and peer review represents an opportunity for knowledge to be generated, shared and adjudicated in new ways. We just have to (fearlessly) embrace it.

“I Hate Your Paper”, Jef Akst, The Scientist
    Link available at: http://doccami.posterous.com/peer-review-peer-rejected-peer-review-academi

  2. Nice wrap-up and retort!

  3. Thanks Cami, Bjoern.

    I do apologize for the snarkiness. But I got three emails from different people in the company before midday today all saying ‘hey!’ — so I’m blaming them 😉

  4. Jim Lindelien says:

Traditional secretive peer review is accompanied by all manner of politicking and potential backstabbing, over factors that have more to do with near-term funding battles and personal egos than with the scientific value of the paper under review. An open online peer review process, on the other hand, makes such tactics harder to hide, and is therefore to be encouraged, even if it is frightening.

  5. Adam Smith says:

I think the big issue is how to respond if you are an author. If I respond to one criticism, does that mean I have to respond to the next 10 or 100 or 1,000 blogs that comment on my work? Do I have a month to do the control that you think is oh so important, or will you have moved on by then? What if, horror of horrors, I don’t know about your blog and don’t respond?

A forum is needed where we all know where to go and how to go about responding to post-publication review of our work. This is what I think the author means by ‘a new set of cultural norms’. Incidentally, this seems to be what F1000 is, which is likely why the author highlighted it.

6. Those are very good points, Adam. You’re absolutely right that there is an expectation that authors will respond, and perhaps—just perhaps—that expectation is unrealistic for the reasons you mention.

  7. David Crotty says:

Perhaps the unnamed critics are Nature editors themselves, and this is what they’re referring to, as far as the correlation between F1000 rankings and Impact Factor goes:
    http://www.nature.com/neuro/journal/v8/n4/full/nn0405-397.html

  8. Hah, thanks for that, David. It would be nice if they looked at Liz Allen’s PLoS paper in 2009.

Legacy comments are closed.
