All articles published in Public Library of Science (PLOS) journals that have been recommended in F1000Prime – more than 3000 articles – now include the F1000Prime recommended badge and score. PLOS have added F1000Prime scores to their sophisticated suite of article-level metrics (ALMs), which helps give authors of important articles more information on the impact of their work and its recognition by F1000 Faculty Members.
ALMs offer a rapid and broad view of the reach and impact of research articles through an ever-increasing range of metrics. These range from traditional citation data through article downloads and discussions on science blogs to usage of articles on a variety of social media and online bookmarking services.
An example of a PLOS article’s ALMs, including an F1000Prime score, is pictured. You can find all of the PLOS articles our Faculty have recommended by searching F1000Prime. You can also use the PLOS ALM data set, or, for the more technically minded, use the ALM application programming interface (API) to explore – and reuse – all the data in real time.
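For readers who want to try the API, here is a minimal sketch of how a query for one article's metrics might be assembled and its response parsed. The endpoint URL, the `ids` parameter, and the response shape used below are assumptions based on the historical v5 ALM API, and the DOI is a placeholder; check the current API documentation before relying on any of them.

```python
# Sketch of querying the PLOS ALM API for a single article's metrics.
# The endpoint and response shape are assumptions (historical v5 API);
# the DOI below is a placeholder.
import urllib.parse

ALM_BASE = "http://alm.plos.org/api/v5/articles"  # assumed endpoint


def build_alm_url(doi):
    """Construct the query URL for one article, identified by DOI."""
    return ALM_BASE + "?" + urllib.parse.urlencode({"ids": doi})


def extract_source_total(alm_response, source_name):
    """Pull the total for one metric source (e.g. 'f1000') out of a
    decoded JSON response; returns None if the source is absent."""
    for article in alm_response.get("data", []):
        for source in article.get("sources", []):
            if source.get("name") == source_name:
                return source.get("metrics", {}).get("total")
    return None


if __name__ == "__main__":
    url = build_alm_url("10.1371/journal.pone.0000000")  # placeholder DOI
    print(url)
    # A live call would then fetch and decode the JSON, e.g.:
    # import json, urllib.request
    # with urllib.request.urlopen(url) as resp:
    #     data = json.load(resp)
    #     print(extract_source_total(data, "f1000"))
```

Separating URL construction from response parsing keeps each piece testable without a network connection.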
As they introduce F1000Prime scores to their ALMs, PLOS have also made some changes to how they classify article-level metrics, which gives F1000Prime recommendations a unique ‘Recommended’ category. These new classifications – a new ALM “ontology” – are explained in a paper by Martin Fenner and Jennifer Lin in the Information Standards Quarterly, which has devoted its entire summer issue to altmetrics.
The paper describes how and why metrics are now classified by PLOS as measuring whether an article has been:
1. Cited (e.g. in another article)
2. Recommended (in F1000Prime)
3. Discussed (on social media, article comments, blogs or Wikipedia)
4. Saved (on social or academic bookmarking services such as CiteULike and Mendeley)
5. Viewed (downloaded from the journal or PubMed Central)
They also propose that this order represents the “levels of engagement” with a published article: citation reflects the deepest engagement and, moving down the numbered list, downloading/viewing the shallowest (apparently only one in 70 people who download a paper go on to cite it).
These are useful proposals, as differentially weighting metrics will be important as we try to understand more about them. Mentioning an article on the freely and publicly available micro-blogging service Twitter is far easier, and far less rigorous, than a formal citation. A citation is a more scientific – more engaged – use of an article, although this assumes everyone always reads the papers they cite. We could argue that being recommended in F1000Prime is the highest level of engagement, as we know each paper has been read, reviewed, understood and commented on by an expert scientist. These are initial ideas, but they are exactly the kind of discussions we’re keen to continue with PLOS and other interested parties, as we seek to better understand, refine and increase the recognition of new, non-citation-based metrics (altmetrics) and ALMs.
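To make the idea of differential weighting concrete, one might combine per-category counts into a single score. The five categories below come from the Fenner/Lin classification described above, but the numeric weights and the sample counts are invented here purely for illustration; choosing defensible weights is exactly the open question under discussion.

```python
# Illustrative sketch only: the engagement categories follow the
# PLOS/Fenner-Lin "ontology", but the weights are assumed values
# (higher = deeper engagement), not an established scheme.
ENGAGEMENT_WEIGHTS = {
    "cited": 5,
    "recommended": 4,
    "discussed": 3,
    "saved": 2,
    "viewed": 1,
}


def weighted_engagement(counts):
    """Combine per-category event counts into one weighted score."""
    return sum(ENGAGEMENT_WEIGHTS[category] * n
               for category, n in counts.items())


# Hypothetical counts for a single article:
article_counts = {"cited": 3, "recommended": 1,
                  "discussed": 10, "saved": 7, "viewed": 500}
print(weighted_engagement(article_counts))  # 3*5 + 1*4 + 10*3 + 7*2 + 500*1 = 563
```

Even this toy example shows why weighting matters: raw counts would be dominated by views, while a weighted score lets rarer, deeper signals such as citations and recommendations register.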
Indeed, there are various research questions that could be asked of PLOS’s growing corpus of ALM data, such as the association of non-citation metrics with citations. One study, although criticized and subsequently corrected, found that highly tweeted articles are more likely to be highly cited. Another study, by the Medical Research Council, found that articles recommended in F1000Prime attracted more citations. How much it matters that an alternative metric is associated with more citations is, however, an open question. Alternative and article-level metrics are not merely trying to recreate or validate a system – citations to articles and journals as the gold standard for research assessment – which we already know is flawed. More article-level data gives us more opportunities to ask different questions, which give a broader picture of the real impact of research.
A study published in PLOS One in 2009 (and another study in Scientometrics in 2013) found that F1000Prime recommendations were useful for identifying important articles which citations alone would miss. There are important measures of research impact we are not yet comprehensively assessing even with all ALMs combined.
Richard Smith, former Editor of the BMJ, once tried to define and measure the influence of scientific publications. He proposed six levels of influence of medical research, ranging from the most important – level one – being “change in the real world”, such as how doctors treat patients, down to “simply being known about” at level six (see his editorial in the journal eCancer). Interestingly, Smith places citations and other metrics in the lower half of this scale, across levels four (being quoted) and five (being paid attention to).
There is certainly much still to be learned and improved about the way we measure the impact of research, but through collaborations with partners such as PLOS – and any other interested publishers, whom we encourage to contact us – we hope to make faster progress.