How many more times?

…what dreams may come
When we have shuffled off this mortal coil,
Must give us pause

Thomson, in a commentary in the Journal of the American Medical Association, reckon there ain’t nowt wrong with the Journal Impact Factor:

The impact factor has had success and utility as a journal metric due to its concentration on a simple calculation based on data that are fully visible in the Web of Science. These examples based on citable (counted) items indexed in 2000 and 2005 suggest that the current approach for identification of citable items in the impact factor denominator is accurate and consistent.

Well, they would say that.

And they might well be right, and you and I and Thomson Reuters might argue the point endlessly. But there are a number of problems with any citation-based metric, and a pretty fundamental one was highlighted (coincidentally?) in the same issue of JAMA.
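For reference, the 'simple calculation' Thomson are defending is the standard two-year formula. A minimal sketch, with purely illustrative numbers — the whole argument is over what gets counted as a 'citable item' in the denominator:

```python
# A sketch of the standard two-year Journal Impact Factor.
# The numbers below are made up for illustration.

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """JIF for year Y: citations received in Y to items published in
    Y-1 and Y-2, divided by the number of 'citable' items (articles
    and reviews) the journal published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# e.g. 10,000 citations this year to items from the previous two years,
# against 350 items deemed 'citable':
print(round(impact_factor(10_000, 350), 1))  # 28.6
```

Note that shrinking the denominator — by classifying editorials, letters and the like as non-citable while their citations still land in the numerator — inflates the result, which is why the denominator is where the fight is.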

Looking at three general medical journals, Abhaya Kulkarni at the Hospital for Sick Children in Toronto (shout out to my friend Ricardipus) and co. found that three different ways of counting citations come up with three very different numbers.

Cutting to the chase, Web of Science counted about 20% fewer citations than Scopus or Google Scholar. The reasons for this are not totally clear, but are probably due to the latter two being of wider scope (no pun intended). Scopus, for example, looks at ~15,000 journals compared with Web of Science’s ~10,000. Why? The authors say that Web of Science ‘emphasized the quality of its content coverage’: which in English means it doesn’t look at non-English publications, or those from outside the US and (possibly) Europe, or other citation sources such as books and conference proceedings. And that’s before we even start thinking about minimal citable units; or non-citable outputs; or whether blogs should count as one-fiftieth of a peer-reviewed paper.

Presumably some of the discrepancy is due to removal of self-cites, which strikes me as being just as unfair: my own output shouldn’t count for less simply because I’m building on it. It’s also difficult to know how to deal with the mobility of scientists: do you only look at the last author? or the first? I don’t know how you make that one work at all, to be honest.

That aside, I think curation of citation metrics is necessary: Kulkarni et al. report that fully two percent of citations in Google Scholar didn’t, actually, cite what they claimed to. That is a worrying statistic when you realize that people’s jobs are on the line. You have to get this right, guys.

But it’d be nice if we could all agree on the numbers to start with.


Postscript

The thought of throwing out the Journal Impact Factor is, actually, a scary one (not least, presumably, to Thomson Reuters) because we’re all familiar with it. But as the REF looms and scientists’ careers and dreams are increasingly on the line, we have to ask whether we’re doing the right thing. I’m not at all convinced by the ‘Churchill’ argument—that there’s nothing better—as there’s too much at stake. Shall we chicken out, and

… rather bear those ills we have
Than fly to others that we know not of?


Filed under Indicators, Journals, Literature, Metrics.

3 comments

  1. JenJen says:

    Unfortunately, this is only going to get worse as we’re busy creating multiple homes for articles so the “same” item is going to be able to be cited in the original journal, in PMC, as a preprint in the author’s institutional repository etc. The situation we have now generally only has one version of each item and it’s already a mess. I tried to do an analysis of citations in WoS and Google Scholar (before we even had SCOPUS at my inst) and I threw in the towel. There was about a 60% overlap between GS and WoS, but about 40% were in one but not the other. Reasons were all over the place – incorrect text parsing in GS, slow indexing in WoS. Sloppy citations of course underlie a lot of it.

  2. rpg says:

    Indeed. I’d like to say that the DOI would help, but I even had problems finding DOIs for those JAMA articles, so… no, it won’t.

  3. Pingback: The dawning of the age of article-level metrics « Faculty of 1000