What is impact?

Last week the San Francisco Declaration on Research Assessment (DORA) launched, with Faculty of 1000 as one of the original 75 scientific organizations to sign the Declaration. DORA is a laudable initiative and anyone with an interest in improving the way we assess the quality and impact of research should read and, if you agree with the proposals, sign it. As DORA states, it is imperative that scientific output is measured accurately and evaluated wisely.

The migration of science and science publishing to the web has given us a wealth of new tools to measure usage of published papers – and other products of research such as datasets and software – that are not based on citations. These usage measures are encapsulated in the growing ‘altmetrics’ landscape. F1000Prime recommendations, which provide a machine-readable star rating of papers along with a human-readable comment, are an established non-citation-based metric. An increasing number of publishers and publishing services use our data – including Altmetric, in which F1000Prime is a distinct measure – and we frequently make F1000Prime data freely available for research on metrics.

Many journals and publishers, including F1000Research, are developing collections of article-level metrics (ALM) tools. ALMs measure the usage and impact of specific papers; they can include citations, though even simply displaying article download counts is a form of ALM. Indeed, the first commercial open access publisher, BioMed Central, founded by F1000 Chairman Vitek Tracz, has always made article download statistics available.

Importantly, DORA is not just about derailing the Impact Factor. The Impact Factor is not a completely meaningless metric, but it needs to be used appropriately. Studies have shown the Impact Factor to have some value in assessing the quality of journals. Where it falls down – badly – is in the judgement of individuals and individual papers. This is what DORA is about – using better, more appropriate tools to judge impact, and using these tools to give us a better understanding of the true value of the research that the tax-paying public largely funds. New, non-citation-based metrics are not perfect either, but they greatly enrich the data we can use to better understand scientific impact. Put another way, all measures of research impact have limitations – and appropriate and inappropriate uses – and unknowns. We should recognise the limitations of all metrics and of our own knowledge, drop the ‘alternative’, and just call them research metrics. We should also recognise that research metrics are often surrogates for impact and influence that can be more difficult to measure. I’ll illustrate this last point with two examples:

The most accessed paper of all time in the open access Journal of Medical Case Reports, a journal I used to be the publisher of, is about a sexually-sustained injury – with gruesome images included. It’s been downloaded, at the time of writing, nearly 40,000 times. Is this impact? It depends. Readership – of which downloads are a reasonable proxy – is undoubtedly important. But given that, as an open access article, it’s fully exposed to search engines, it seems unlikely that scientific usage is driving this high readership figure.

Another paper, published in the Lancet in 2004, reports a randomized controlled trial demonstrating that prophylactic treatment with an affordable antibiotic helps prevent the spread of opportunistic infections – and, ultimately, deaths – in children with HIV. This is, unequivocally, impact. However, it’s not something we can measure with a blunt instrument like the number of times the paper is cited or tweeted. Context, which we aim to provide with F1000Prime, is as important as numerical values.

These two hastily selected examples aren’t intended to undermine the importance of metrics. Metrics are very important. They decide who gets hired and promoted, help us judge the value of the research that is funded, and influence what new research gets funding. However, it’s important to recognize that impact measures – whether citations, downloads, tweets or shares – shouldn’t be confused with the kind of scientific impact that matters most.


3 thoughts on “What is impact?”

  1. Ben Arnold says:

    How about Patents, maybe the highest impact of any publications??

  2. “The Impact Factor is not a completely meaningless metric, but it needs to be used appropriately. Studies have shown the Impact Factor to have some value in assessing the quality of journals.”

    Really? The study (singular) cited showed that in medical journals impact factor was highly correlated with physicians’ ratings of journal quality. Well, yes, because physicians mistakenly think the IF means something and erroneously believe journals with a high IF are better.

    But if we use some of our most basic scientific criteria and critical thinking skills, we realize that (1) some types of publications within journals, such as letters and commentaries, contribute citations (the numerator) but do not themselves count as “papers” (the denominator), and hence inflate the journal’s IF; (2) the IF depends on the number of references, which differs among disciplines and journals; (3) the inclusion of journals in the database depends solely on Thomson Reuters, a private company, and not on the field’s practitioners; (4) the exact IF published by Thomson Reuters cannot be replicated using publicly available data; (5) the distribution of citations per paper is not normal, so at the very least the mode or median ought to be used instead of the mean; (6) the two-year span for papers followed by one year for citations is completely arbitrary and favours high-turnover over long-lasting contributions; (7) journal editors can manipulate and artificially inflate their IFs; and (8) the relationship between paper quality and IF is weakening, so the IF is losing its significance as a measure of journal quality.
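    The mean-versus-median point in (5), and the numerator/denominator asymmetry in (1), are easy to see with a small numeric sketch. All numbers below are illustrative, not real journal data:

    ```python
    # Minimal sketch of the Impact Factor calculation criticised above.
    # All figures are invented for illustration.
    from statistics import median

    # Citations received this year by the journal's papers from the previous
    # two years. Citation distributions are heavily skewed: a few papers
    # collect most citations, most papers collect almost none.
    citations_per_paper = [0, 0, 1, 1, 2, 2, 3, 4, 5, 120]

    # Point (1): letters and commentaries may attract citations that count
    # in the numerator, yet they are excluded from the "citable items"
    # denominator. Here we assume 15 such extra citations.
    extra_citations_to_noncitable_items = 15

    citable_items = len(citations_per_paper)  # 10
    total_citations = sum(citations_per_paper) + extra_citations_to_noncitable_items

    impact_factor = total_citations / citable_items  # mean-based, outlier-inflated

    print(f"Impact factor (mean-based): {impact_factor:.1f}")
    print(f"Median citations per paper: {median(citations_per_paper)}")
    ```

    One outlier paper drags the mean-based figure far above what a typical paper in the journal achieves, which is why the median is the more honest summary of a skewed distribution.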

