Numbering the watchmen
17 March, 2011 | Richard P. Grant
A couple of weeks ago we had Liz Allen and Claire Vaughan talking about the way the Wellcome Trust assesses its own funding impacts. We've been a bit more directly involved with the Medical Research Council (MRC) and how they're assessing their own grants, working with them to match papers arising from MRC-funded research to F1000 evaluations.
While both citation impact and F1000 evaluations are subjective, it is worth noting that the F1000 score is based on a positive recommendation by named, hand-picked scientists who are not the authors. This mitigates the problems of self-citation and negative citations.
The report is now available. It doesn't come as a great surprise to read that, like the Wellcome study before it, the MRC study finds that F1000 evaluations (strongly) predict citations. Now, citation counts are a crude and somewhat problematic measure of 'impact', but at least everybody is familiar with them, and they give us somewhere to start. And it's gratifying to see F1000 come out looking so good in this analysis too.
There does seem to be a slight error in the report: "Conversely, between 20% and 25% of eligible papers from these [Nature, Science and Cell] are not recommended by members of Faculty of 1000 Biology." It's a little difficult to get good numbers for Nature and Science, because the proportion of non-biological papers in those journals is indeterminate. About 40% of all Nature research papers get evaluated by F1000, and ~30% of Science papers. However, assuming that all research papers in Cell are eligible (it's a biological journal), 46% were evaluated in 2008, which means 54% were not recommended: that makes the 20-25% figure an underestimate by roughly a factor of two.
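For what it's worth, the arithmetic is straightforward. Here's a minimal sketch (Python, purely illustrative, using only the figures quoted above):

```python
# Share of Cell papers evaluated by F1000 in 2008, as quoted above
evaluated = 0.46
not_recommended = 1 - evaluated  # 0.54, i.e. 54% not recommended

# The report's claimed range for papers not recommended
reported_low, reported_high = 0.20, 0.25

print(f"not recommended: {not_recommended:.0%}")
print(f"underestimate: {not_recommended / reported_high:.1f}x "
      f"to {not_recommended / reported_low:.1f}x")
# -> not recommended: 54%; underestimate: 2.2x to 2.7x,
#    i.e. the reported range is low by roughly a factor of two
```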
It is clear that papers chosen for evaluation by faculty members do subsequently accumulate a high citation impact. This suggests that papers selected by the F1000 faculty may well be ones to watch, and we should be interested in MRC-attributed papers that gain a high evaluation score and attract a number of independent evaluations on F1000.
The authors of the report criticize our labels (currently 'Changes Clinical Practice', 'Novel Drug Target', 'Technique', 'Clinical Trial' and 'Review'). They say, "F1000 have not set out clear objective criteria for the use of these categories, and they are therefore used inconsistently by faculty members." This is fair comment. These categories have been in a state of flux for the two years I've been here, and we're thinking of expanding them again now. The authors go on to say, "we would like to continue to work with F1000 to select criteria that will be helpful to the research community in finding literature likely to make a difference to their field", and of course that meshes with our own aims.
The hardest thing, of course, is knowing what categories should be used at F1000, and we welcome the MRC's input. I'd also like to hear your views on the matter: what classifications would you like to see at F1000?
It is likely that the publicity and advertisement that F1000 gives plays a large part in getting a paper cited, with some citers possibly relying on the F1000 evaluation rather than reading the original. Science is so paper-saturated that many people look to reviews or "news and views" pieces to see what is around to cite. As in theatre, critics have an effect. It doesn't mean they are wrong; it just tends to be a self-fulfilling prophecy. I recall one rising star to watch who was soon accused of scientific misconduct.
I've often wondered about that: how much do we affect the experiment we're observing?