Bye bye, Impact Factor…

There are some interesting quotes from the UK’s Universities and Science Minister, David Willetts, in a science policy article in Times Higher Education (THE). Mr Willetts has been seeking to allay researchers’ ‘common anxieties,’ particularly those relating to the measurement of ‘impact.’

The UK Government hands out money to its higher education funding bodies, which distribute it according to the results of the Research Excellence Framework (REF), due to be completed in 2014. Traditionally, the REF’s predecessor, the Research Assessment Exercise, measured the ‘impact’ of research by counting publications in high-impact-factor journals. Mr Willetts seems to be saying that the Journal Impact Factor will not play a role in the REF:

“Individual universities may have a different perspective on the journals you should have published in when it comes to promotion and recruitment, but the REF process makes no such judgements.”

Reaction on Twitter seems to be a little bit sceptical of such blue-sky thinking (after all, the REF is administered by individuals who will no doubt look at a journal’s impact factor), but it’s heartening to see that the will is there, anyway.

The Government appear to be treading a very tricky path, actually. Mr Willetts also said that university departments should “look beyond publication in a peer-reviewed journal as the be all and end all of academic life.” This is likely to make academics gnash their teeth, as the last thing they want to do is scrabble to describe their research in some kind of media-friendly distortion of the truth. Reading the THE article, it seems that Mr Willetts is encouraging researchers to be innovative in the way they define ‘impact,’ but maybe that’s just me being cynical.

To balance that cynicism, it’s encouraging to note that Mr Willetts seems to be taking seriously the concerns of researchers across the board, referring to a recent report from Science is Vital (a campaign I’m closely involved with):

“Given the system cannot provide the number of senior academic positions desired by the pool of young researchers we have, should a long-term research career remain the automatic assumption for graduates entering PhD courses or postdoctoral positions? If not, how should we prepare them for life beyond academia?” he asked.

Mr Willetts certainly appears to be listening to researchers, and taking their concerns seriously. He also said that the European ban on embryonic stem cell patents would not adversely affect funding for such research in the UK. The THE article is worth reading in its entirety.


Filed under Funding, Policy.

One comment

  1. ferdinando boero says:

     The things I will say here have been said many times (and not only by me), but it is worthwhile repeating them.
    There is a site for top Italian Scientists:
    http://www.topitalianscientists.org/Top_italian_scientists_VIA-Academy.aspx
     and the inclusion in it is based on H-index and IF. To be a top scientist one must have an H-index higher than 30. There are more than 2,000 of them, but there is not a single zoologist (maybe one will get there soon, but just one) nor a single botanist. There are lots of medical doctors and lots of molecular and cell biologists, and just a couple of ecologists. This means that there are not only top scientists, there are also top disciplines. If you are good, you work in medicine or in molecular or cell biology; the rest is crap. Can we really afford such a narrow focus on the life sciences? Is this evaluation really a good one?

     Paradoxically, we invest millions in the exploration of biodiversity, but then, as I continuously say, taxonomy is dying because taxonomic journals have low IF. And since there are few taxonomists, the citations are few, and so the H-indexes are not so high. Catch-22. Are we sure that these "traditional" disciplines have exhausted their thrust? If biodiversity is important, how can we study it without taxonomy?

     I have also remarked that there are other measures of the importance of journals besides the Impact Factor. For instance, there is the Cited Half Life (CHL), which measures how long papers published in a given journal keep being cited. The higher the IF, the lower the CHL, usually, and vice versa. In terms of CHL, ">10 years" equals infinity, which means that if you publish in that journal, your paper will be cited forever. Which is surely the case if you publish the description of a new species (well, not really, because if taxonomists disappear, then nobody will cite you). The disciplines with high-IF journals have taken over and do not care about CHL (theirs is small, so it is better to hide this measure and enhance the IF). A solution might be to keep the IF but also consider the CHL, for instance by multiplying the IF by the CHL. Of course, if you multiply a little number by infinity, you still have infinity, and this would lead to the disappearance of disciplines with low CHL. That would be as foolish as what is happening now, looking just at the IF and at H-indexes.
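     To make that multiplication concrete, here is a rough sketch in Python (the citation counts and journal figures are made up purely for illustration, and the ">10 years" half-life is capped at 10 rather than treated as a true infinity):

         # Illustrative only: toy citation counts and invented journal figures, not real data.

         def h_index(citations):
             # Largest h such that h papers have at least h citations each.
             cites = sorted(citations, reverse=True)
             h = 0
             for rank, c in enumerate(cites, start=1):
                 if c >= rank:
                     h = rank
                 else:
                     break
             return h

         def if_times_chl(impact_factor, cited_half_life, cap=10.0):
             # Multiply IF by CHL; cap the ">10 years" value so the product stays finite.
             return impact_factor * min(cited_half_life, cap)

         # A hypothetical taxonomist: modest citation counts spread over many papers.
         print(h_index([12, 9, 7, 6, 5, 5, 4, 3, 2, 1]))   # H-index of 5, far below the threshold of 30

         # Hypothetical journals: high IF with a short CHL versus a low-IF taxonomic journal
         # whose CHL is reported as ">10 years".
         print(if_times_chl(30.0, 4.0))    # 120.0
         print(if_times_chl(1.5, 10.0))    # 15.0

     Even with the cap, the toy numbers show the product still dominated by the IF; without the cap, every ">10 years" journal would score infinity, which is exactly the dilemma described above.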
     The IF was established as a marketing strategy, to avoid indexing journals that were not widely read, or journals that had no market for the firm that gave the IF (the Institute for Scientific Information). Museum journals, for instance, received no IF simply because the Zoological Record filled the customer niche already, and there was no market to sell information to systematists. The choice contributed to pushing taxonomy towards extinction. I bet there are other disciplines in the same state. This has generated a large bias in the way we conduct scientific research. Governments say that biodiversity is important, but the scientific community says that the research on it is not important (due to low IF). Evidently there is a mismatch between the priorities of society and those of the scientific community.
     I concur that we have to measure in some way what we do, and I know that there is good and bad taxonomic research. But the IF alone is not objective, and it does not only exclude "bad scientists", it also defines "bad disciplines". When I consider the issue from this point of view… I smell scientific racism and prejudice.
     By the way, GenBank is probably full of precise sequences that are ascribed to misidentified species. The authors with high IF do not produce good science if they do not know how to distinguish species! The precision is high (beautiful sequences), but the accuracy is low (sequences ascribed to incorrectly determined species). Furthermore, if a wrong identification is validated by a deposited sequence, then we have the premises for a pre-Linnaean Tower of Babel. And the wrong names become right!
     I know the case of taxonomy as a victim of the IF, but there may be others. Maybe the readers of this forum can add to the list.