Retraction index

In case you haven’t seen it already, there’s an intriguing new article in Infection and Immunity, Retracted Science and the Retraction Index. It’s by Ferric C. Fang, Editor in Chief of Infect. Immun., and Arturo Casadevall, Editor in Chief of mBio and long-serving F1000 Member in Medical Microbiology.

Fang, with Casadevall, looked at retractions from his own journal and also reviewed retractions from a Committee on Publication Ethics (COPE) survey of Medline, coming up with their own “retraction index”. Interestingly, while the rate of retraction is increasing, it’s not clear whether this is due to increasing disease or increasing diagnosis; I’m reminded of this ADHD map:
ADHD in the US
And, somewhat hearteningly, around 40% of retractions seem to be down to simple mistakes or something wrong with the reagents.

But the meat of the article is the “retraction index” part. There is “a surprisingly robust correlation” between this index and journal impact factor. The authors suggest “the probability that an article published in a higher impact journal will be retracted is higher than that of an article published in a lower impact journal.”

Although the link is still correlative, this does raise questions about the publication process (and sheds a very harsh light on bibliometric measures of ‘worth’). The authors, refreshingly, don’t see this as a failure of the peer review process, saying

When a prominent article is retracted, a common refrain is, “Why didn’t the reviewers catch that?” In fact, many would-be retractions are caught during the review process. However, without access to raw data, it is unrealistic to expect that even careful and highly motivated reviewers can detect all instances of falsification or fabrication.

This one is going to run for a while, I think, and maybe I should count up retractions of F1000-evaluated articles, too. I’ll let you know what I find.


8 thoughts on “Retraction index”

  1. Mahboob says:

This may also be because papers published in higher impact journals claim bigger and more remarkable discoveries, whereas lower impact journals publish papers with truly modest findings.

    All comes down to publication bias….

  2. I don’t think, at this stage, you can say it’s ‘all’ down to that. A hypothesis that should also be considered is that higher impact journals are better at investigating and withdrawing suspect publications.

  3. People might also be interested in http://pmretract.heroku.com/journals a site that Neil Saunders put up that is actually tracking live retraction information from PubMed. Details on how it was pulled together are here: http://nsaunders.wordpress.com/2010/12/22/monitoring-pubmed-retractions-a-heroku-hosted-sinatra-application/

    1. Lovely, thanks Cameron. It would be nice to see those figures as a rate rather than an absolute number, though. Also, I’d be interested in discipline differences.

  4. Irene Hames says:

    Richard, it’ll be interesting to see what you come up with when you count the retractions of F1000-evaluated articles.

    Also, it would be very helpful to readers of the blog and subscribers to F1000 to have details set out here and on the main website of how F1000 monitors for retractions of articles it’s evaluated and what it then does. I think notes are added (I can’t access evaluations to check), but do you have links through to the retraction notices? Unfortunately, not all of those are freely available. COPE’s Retraction Guidelines http://www.publicationethics.org/files/retraction%20guidelines.pdf recommend that they should be, and I personally don’t think there is any excuse for publishers not to do this. And even when notices are available, details are sometimes minimal or totally lacking, but it would still be good to have the links.

    1. Thanks for commenting, Irene. You raise good points.

      I’m going to have to respond to them properly in another post, but we do monitor for retractions, although it’s not straightforward, and we note retractions on the evaluations pages. We do link through to the notices: here’s a free example. We also have a total of 80 retracted articles on our books, but I’ll be analysing that in more depth and making comparisons once I get the raw data that Fang and Casadevall used.

      We used to remove retracted articles entirely, but changed our policy in March 2009.

  5. Irene Hames says:

    Many thanks, Richard, for the clarification and the free example. I see it’s another one of JBC’s minimal explanations: ‘This article has been withdrawn by the authors.’
    http://retractionwatch.wordpress.com/category/by-journal/jbc-retractions/

  6. Indeed. My personal view is that JBC isn’t that great a journal, editorially; my own experience of publishing there (admittedly many years ago) took me by surprise. I might blog about it on my personal site.
