…what dreams may come
When we have shuffled off this mortal coil,
Must give us pause
The impact factor has had success and utility as a journal metric due to its concentration on a simple calculation based on data that are fully visible in the Web of Science. These examples based on citable (counted) items indexed in 2000 and 2005 suggest that the current approach for identification of citable items in the impact factor denominator is accurate and consistent.
Well, they would say that.
And they might well be right, and you and I and Thomson Reuters might argue the point endlessly. But there are a number of problems with any citation-based metric, and a pretty fundamental one was highlighted (coincidentally?) in the same issue of JAMA.
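For concreteness, the ‘simple calculation’ behind the impact factor is just a ratio: citations received this year to a journal’s papers from the previous two years, divided by the number of ‘citable’ items the journal published in those two years. A minimal sketch, with invented numbers (no real journal is being described here):

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Journal Impact Factor for year Y: citations received in Y to items
    published in Y-1 and Y-2, divided by the number of 'citable' items
    (articles and reviews) published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 3,200 citations in 2009 to its 2007-2008 papers,
# of which 400 were counted as citable items.
print(impact_factor(3200, 400))  # 8.0
```

The whole argument about the denominator, of course, is about which items get counted as ‘citable’ in the second argument; move a few editorials in or out and the number shifts.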
Looking at three general medical journals, Abhaya Kulkarni at the Hospital for Sick Children in Toronto (shout out to my friend Ricardipus) and colleagues found that three different ways of counting citations come up with three very different numbers.
Cutting to the chase, Web of Science counted about 20% fewer citations than Scopus or Google Scholar. The reasons for this are not totally clear, but are probably due to the latter two being of wider scope (no pun intended). Scopus, for example, looks at ~15,000 journals compared with Web of Science’s ~10,000. Why? The authors say that Web of Science ‘emphasized the quality of its content coverage’: which in English means it doesn’t look at non-English publications, or those from outside the US and (possibly) Europe, or other citation sources such as books and conference proceedings. And that’s before we even start thinking about minimal citable units; or non-citable outputs; or whether blogs should count as one-fiftieth of a peer-reviewed paper.
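The scope effect is easy to see if you treat each database’s citation list as a set. A toy sketch, with invented identifiers (the real overlaps are messier than strict supersets, but the direction is the same):

```python
# Hypothetical citing documents for one article, as three databases
# with progressively wider coverage might report them.
wos = {"art1", "art2", "art3", "art4"}          # journal articles only
scopus = wos | {"conf1", "jrnl_extra1"}         # + more journals, proceedings
gscholar = scopus | {"book_chapter1"}           # + books, theses, etc.

for name, db in [("Web of Science", wos),
                 ("Scopus", scopus),
                 ("Google Scholar", gscholar)]:
    print(f"{name}: {len(db)} citations")
```

Same article, three different counts, before anyone has even argued about self-citations or accuracy.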
Presumably some of the discrepancy is due to removal of self-cites, which strikes me as being just as unfair: my own output shouldn’t count for less simply because I’m building on it. It’s also difficult to know how to deal with the mobility of scientists: do you only look at the last author, or the first? I don’t know how you make that one work at all, to be honest.
That aside, I think curation of citation metrics is necessary: Kulkarni et al. report that fully two percent of citations in Google Scholar didn’t, actually, cite what they claimed to. That is a worrying statistic when you realize that people’s jobs are on the line. You have to get this right, guys.
But it’d be nice if we could all agree on the numbers to start with.
The thought of throwing out the Journal Impact Factor is, actually, a scary one (not least, presumably, to Thomson Reuters) because we’re all familiar with it. But as the REF looms and scientists’ careers and dreams are increasingly on the line, we have to ask whether we’re doing the right thing. I’m not at all convinced by the ‘Churchill’ argument—that there’s nothing better—as there’s too much at stake. Shall we chicken out, and
… rather bear those ills we have
Than fly to others that we know not of?