On taking a good look at ourselves
3 February, 2011 | Richard P. Grant
Perhaps the most distinctive and powerful thing about Science is its tendency, or rather its proclivity, to ask searching, even uncomfortable questions. And unlike belief systems, ideological and political movements, or pseudoscience, it asks those questions of itself. There’s been a fair bit of that going on recently.
An article in the New Yorker looked at the puzzling phenomenon, as yet unnamed, of seemingly solid observations becoming less reproducible over time. This article received two evaluations on F1000 (here and here), and sparked a lively discussion here on Naturally Selected.
The New York Times reported on a paper in the 4 January issue of the Annals of Internal Medicine, which reports that published clinical trials frequently fail to cite previous trials. Now, there might be trivial explanations for that, but in the same vein Sir Iain Chalmers, editor of the James Lind Library, has some harsh words for scientists. He says there are fundamental and systemic things wrong with the way research, particularly clinical research, is done today.
It’s certainly misconduct
Among these, he accuses (some) researchers of not addressing questions that are of interest to patients and clinicians, of failing to contextualize new findings, and of failing to be clear about what they’ve actually discovered. He also takes aim at scientists’ failure to publish negative or ‘disappointing’ results. Last year Ann Intern Med published a paper, recently evaluated at Faculty of 1000, scrutinizing the reliability of, and inherent bias in, clinical trials. And today, Nature published a Correspondence arguing that it is critical to publish negative results.
Interesting times. Are the criticisms Sir Iain makes fair? If so, is this the fault of the scientists themselves, or of the system in which they find themselves working? If the latter, how can it be changed? And what about publishing negative results, reproducibility, and publication bias?
Are we questioning Science enough?
I find Sir Iain’s criticisms quite disturbing. One of them might be rather easily addressed, though. If researchers are failing to investigate the problems that patients and clinicians care about, funding agencies could change the situation in one funding cycle by identifying such problems and funding research into them.
On the general point of “taking a good look at ourselves” (yourselves, actually, as I’m not a scientist), I was shocked by the article in the New York Times about the statistical methods used by Daryl Bem in his recent publication on “ESP” (Jan 5, 2011, “Journal’s Paper on ESP Expected to Prompt Outrage” by Benedict Carey – http://www.nytimes.com/2011/01/06/science/06esp.html).
I am not a statistician, but the suggestion that most clinical research uses a faulty statistical model is really alarming. It’s one of those issues that makes me feel vulnerable and at sea, because the questions are so far beyond my training that I have to take someone’s word on it. At the moment, for me, the latest word is not at all comforting.
Correction: The URL to the New York Times article is faulty; the closing ) got stuck in and it doesn’t work. Try this instead:
http://www.nytimes.com/2011/01/06/science/06esp.html
His final comment was that he believes, “Those issues need to be tackled by the science community as a whole.” I concur, but what he says is nothing new. His entire commentary is a distillation of separate items that have been bandied about in The Scientist, in LinkedIn group discussions, and in many other forums for several years now.
The issue that arises is with the definition of “the science community as a whole.” What does this mean? Is there such a thing? How would implementation occur? Must changes be piecemeal, from scientific society to scientific society, journal to journal? One great place to start here in the United States would be FASEB. They have the numbers, the clout, and a series of high-impact journals.
I wish Chalmers had given an example of research that is not of interest to patients and clinicians. It is true that in science we often do what we can rather than what needs to be done. If authors have not cited previous clinical trials or pointed out what is truly novel, reviewers are at least as much at fault as the authors. If reviewers are not holding authors responsible for proper scholarship, then peer review is failing.
Even though it is not new, it is good to be reminded of it from time to time. Who knows, maybe one day the problems will actually be tackled. Clinical research is the target here, but as a non-clinical researcher, I could give many examples of scientific misconduct in fundamental research (and I am still at the very beginning of my career).
The increasing pressure on research groups to publish high-impact-factor articles, and to compete for relatively scarce funds, exacerbates misconduct. As you feel pressured, you start to bend the rules more and more.
The insane competition that rules the scientific world at the moment, and the time scientists have to spend writing grant applications with a <10% success rate, drive researchers to take shortcuts, to exaggerate the importance of their contribution, to minimise previous contributions to the field, and to misbehave in many other ways not mentioned here.
I do not believe those behaviours should be excused, but stricter publishing rules are not enough. Somewhere, the pressure has to be relieved.
The lack of independence of modern science is a central problem. National politics and various funding agencies put pressure on scientists to search for the expected, to neglect teaching, to produce ready-to-use science, and to all compete on the same subjects.
This not only drives some researchers to misbehave; it also means that we miss out on important discoveries, we compete rather than collaborate, and we compromise future generations' training, generating an incredible waste of resources.
Science still works – its open and public nature will correct goal-driven errors. What young scientists need to know is that goal-seeking fudging will eventually damage their reputation, and hence their prospects. Honesty is the best policy, but that alone is not enough since many errors are made unconsciously. I am reminded that most aviation accidents occur when the pilot is approaching his/her destination. The desire to get home, that is to achieve a goal, makes errors more likely.
What can help is talking to colleagues about your current work. The problem in the modern grant system is its incentive to keep everything secret until the work is published. This weakens science, but does not destroy it.
Paul Stein is absolutely right: what I say in my six-minute podcast is certainly not new. I’ve been wittering on about these matters for more than a quarter of a century (1,2). During that time things appear to have been getting worse, not better, so maybe it’s time for me to shut up!
As I don’t know what FASEB stands for, I don’t know where Paul Stein’s suggestion fits within my conception of the scientific community. But I doubt that whatever FASEB is covers all the elements of the scientific community which need to be covered: not just researchers, but also research funders, research regulators (including research ethics committees), and editors and publishers.
Bob Hurst feels that reviewers are at least as much at fault as authors in acquiescing in failures to cite relevant previous research (3). It is true that reviewers, as well as researchers, could do a much better job than they do (4); but so also could research funders and research ethics committees (5).
I wish there were evidence to support Kelvin Duncan’s view that science works because it is open. There is plenty of evidence of biased under-reporting of research, that this mainly reflects researchers failing to submit reports of research rather than journal editors rejecting submitted research, and that this form of scientific, ethical and economic misconduct harms not only patients and the public but also the process of scientific discovery (6,7).
Bob Hurst would like an example of research that is not of interest to patients and clinicians. Several could be offered (see http://www.lindalliance.org), but a particularly gross example of a mismatch concerns the questions that patients and clinicians wish to see addressed in research on the management of osteoarthritis of the knee and what researchers are actually doing (8).
All these and other factors are leading to massive amounts of avoidable waste in research (9,10), waste which, in one way or another, is being paid for by the public.
References
1 Chalmers I. Proposal to outlaw the term ‘negative trial’. BMJ 1985;290:1002.
2 Chalmers I. Electronic publications for updating controlled trial reviews. Lancet 1986;2:287.
3 Robinson KA, Goodman SN. A systematic examination of the citation of prior research in reports of randomized, controlled trials. Ann Intern Med 2011;154:50-55.
4 Jefferson T, Deeks J. The use of systematic reviews for editorial peer-reviewing: a population approach. In: Godlee F, Jefferson T, eds. Peer review in health sciences. London: BMJ Publishing Group, 1999, pp 224-234.
5 Savulescu J, Chalmers I, Blunt J. Are research ethics committees behaving unethically? Some suggestions for improving performance and accountability. BMJ 1996;313:1390-1393.
6 Dickersin K, Chalmers I. Recognising, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the World Health Organisation. JLL Bulletin: Commentaries on the history of treatment evaluation (www.jameslindlibrary.org), 2010.
7 Chalmers I. TGN1412 and The Lancet’s solicitation of reports of phase 1 trials. Lancet 2006;368:2206-2207.
8 Tallon D, Chard J, Dieppe P. Relation between agendas of the research community and the research consumer. Lancet 2000;355:2037-2040.
9 Garattini S, Chalmers I. Patients and the public deserve big changes in evaluation of drugs. BMJ 2009;338:804-806.
10 Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet 2009;374:86-89.