Well, I’m really heartened to see scientific debate progressing as it should. A friend sent me the original article [1], and I also came across the popular-science version [5] while looking for background to form my opinion. I was a little shocked when reading the summary:
Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong. So why are doctors—to a striking extent—still drawing upon misinformation in their everyday practice? Dr. John Ioannidis has spent his career challenging his peers by exposing their bad science.
Surely this can’t be right – plenty of people seem to ‘like’ the fact that science is being brought up short – but in reality this seems to me to be a celebration of the scientific method. I certainly agree that a lot of science is partial, that pressures exist which mean longitudinal randomised testing cannot be undertaken, that bias may be introduced (we are only human), that participant recruitment is difficult, and that many other constraints exist too. Certainly, I think of my own work as partial, non-definitive and part of an informal collaboration of scientists within my domain – a step on the road. Indeed, this thinking is why I strongly support open data, open methods, and third-party re-testing.
Back to the paper: Ioannidis and his team perform a meta-analysis of published work looking for errors, and they find them. Now, there is discussion around their analysis [2,4], and there are also responses to this in the debate [3]. I highlight these because there is no mention in ‘the Atlantic’ of the dissenters, and the original title seems crafted to get attention rather than to be proportionate. While the analysis is interesting, the part I find most interesting is the following quote [5]:
“Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable.”
So in summary: of the 49 articles selected, 34 were retested, and 14 of those in some way and to some measure failed reproducibility. So fourteen falsifiable articles – fulfilling Popper – were retested and found to be incorrect in some way. But this is fine; this is what is supposed to happen. The scientific method recognises that scientists are people and that their work can be analysed or collected incorrectly – the reproducibility of results and the ability of hypotheses to fail a test is the key strength of science. We are not required to believe anything we are told – we are only required to reproduce the described results, or fail in the trying. Now, granted, there may be lots of major mistakes or inaccuracies – but there seems to be no discussion of their severity; the data or analysis may be wrong, but how much is changed as a result is not stated.
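As a quick sanity check of the arithmetic in the quoted passage – a minimal sketch using only the numbers reported there – note that the 41 percent figure is a fraction of the retested claims, not of all 49 articles:

```python
# Numbers as reported in the Atlantic quote [5]
articles = 49           # highly cited articles examined
claimed_effective = 45  # of which claimed an effective intervention
retested = 34           # of those claims, how many had been retested
refuted = 14            # retested claims shown wrong or exaggerated

# The "41 percent" is refuted / retested, not refuted / articles
fraction_of_retested = refuted / retested
fraction_of_all = refuted / articles

print(f"{fraction_of_retested:.0%} of retested claims")  # → 41% of retested claims
print(f"{fraction_of_all:.0%} of all 49 articles")       # → 29% of all 49 articles
```

The gap between the two fractions is exactly why reporting the denominator matters.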
So what does all this mean? Well, for me, I’m happy knowing someone is performing meta-analysis of the work, I’m happy that the published science is amenable to refutation, I’m pleased the data sets are available for testing, and I’m pleased the scientific method is working and that I’m not required to believe anything – just to test everything. It seems to me you are able to make your own decisions – the data and the method await.
References:
1. Ioannidis JP (2005). Why most published research findings are false. PLoS Medicine, 2(8). PMID: 16060722
2. Goodman S, & Greenland S (2007). Why most published research findings are false: problems in the analysis. PLoS Medicine, 4(4). PMID: 17456002
3. Ioannidis JP (2007). Why most published research findings are false: author’s reply to Goodman and Greenland. PLoS Medicine, 4(6). PMID: 17593900
4. Goodman S, & Greenland S (2007). Assessing the unreliability of the medical literature: a response to “Why most published research findings are false”. Biostatistics Working Papers, Working Paper 135. http://www.bepress.com/jhubiostat/paper135/
5. Freedman DH (2010). Lies, Damned Lies, and Medical Science. The Atlantic (November). http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/