"We don't even take our own observations seriously, nor accept them as scientific observations, until we have repeated and checked them." Karl Popper
In science, an experiment must give the same result even when it is conducted by different people in different places. But what if we suddenly realized that most of the experiments on which we rely to develop new research and new drugs were not reproducible?
This is not the opening of a science fiction novel; it is plain reality. Nature, one of the most prestigious scientific journals in the world, recently published a survey in which more than 70% of the researchers questioned reported having tried and failed to reproduce another scientist's experiments1; the original studies had nevertheless been published, disseminated and cited by other researchers as the basis of their own work.
Before judging, however, we should consider that «as science evolves, it becomes increasingly difficult to replicate an experiment, because techniques and reagents grow ever more sophisticated, time-consuming to prepare and difficult to teach»2, explains Mina Bissell, one of the most decorated American researchers for her innovative work in oncology. The best thing, Bissell continues, «would be to contact the colleague directly, if necessary to meet and try together to understand why the experiment cannot be reproduced, and then resolve the problem amicably».
The pharmaceutical industry also stops when experiments are not reproducible
In 2011 Glenn Begley, then head of oncology research at Amgen, one of the largest biotechnology multinationals, decided, before proceeding with new and expensive experiments, to replicate the 53 scientific papers considered fundamental, on which Amgen's future oncology research would be built. The result? His team was unable to replicate 47 of the 53 papers, or 89%3.
To put it another way: only 11% of the experiments considered milestones in that field of research were reproducible.
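For those who want to check the arithmetic, here is a minimal sketch, using only the counts quoted above:

```python
# Counts reported for the Amgen replication effort (Begley, 2011).
landmark_papers = 53
failed_to_replicate = 47

reproduced = landmark_papers - failed_to_replicate  # 6 papers
print(f"failure rate: {failed_to_replicate / landmark_papers:.0%}")  # 89%
print(f"success rate: {reproduced / landmark_papers:.0%}")           # 11%
```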
«I was shocked - says Begley - these were studies on which the entire pharmaceutical industry relies to identify new targets for the development of innovative drugs. But if you are going to invest $1 million, $2 million or $5 million betting on an observation, you need to be sure of it. So we tried to reproduce these published studies, and we convinced ourselves that you can't take anything at face value»4.
Trying to calm the waters, Nobel laureate Phillip Sharp stepped in, explaining that «a cancer cell can respond in different ways depending on the experimental conditions. I think much of the variability in reproducibility comes from this»4.
To rule out any error in reproducing the experimental conditions, often due to problems of manual technique or to the use of specific reagents, Begley and his team tried everything, starting with meeting the authors of the original studies in person. «We retraced the published papers line by line, figure by figure - he says - and we ran the experiments 50 times without being able to reproduce those results. In the end the original author told us that he had repeated the experiment six times, that it had succeeded only once, and that he had published only the data from that one attempt in the scientific article.»
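That anecdote, one success out of six attempts, with only the success published, is selective reporting in its purest form. A toy simulation can show how strong the effect is; the 10% false-positive rate per run is an illustrative assumption, not a figure from any of the studies cited here:

```python
import random

random.seed(42)

RUNS_PER_LAB = 6            # the original author repeated the experiment six times
FALSE_POSITIVE_RATE = 0.10  # assumed chance that a run "succeeds" by noise alone
N_LABS = 10_000             # simulated labs, none of which has a real effect to find

# A lab has something to publish if at least one of its runs comes up positive.
labs_with_a_positive = sum(
    any(random.random() < FALSE_POSITIVE_RATE for _ in range(RUNS_PER_LAB))
    for _ in range(N_LABS)
)

print(f"labs that could report a 'success': {labs_with_a_positive / N_LABS:.0%}")
# Expected: 1 - (1 - 0.10)**6, roughly 47%, even though the true effect is zero.
```

In other words, under these assumptions almost half of all labs chasing a non-existent effect could still publish a "positive" result simply by keeping the best of six runs.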
Money invested in failures foretold
If an experiment that has succeeded only once were proposed to you as the basis for investing millions of dollars in research and the production of a new drug, would you invest all that money?
This is the question Leonard Freedman of the Global Biological Standards Institute in Washington and Iain Cockburn and Timothy Simcoe of the Boston University School of Management asked themselves. In a recent study they estimated that the US government spends $28 billion a year on scientific work that is not reproducible. «We don't mean - explains Freedman - that this money is simply thrown away; in some way it contributes to the evolution of science. But it can be said with certainty that, from an economic point of view, the current system of scientific research is extremely inefficient»5.
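The $28 billion figure is not measured directly: Freedman and colleagues obtain it by applying an estimated irreproducibility rate to total US spending on preclinical research. A rough sketch of the calculation, with the two inputs (around $56 billion of annual spending and an irreproducibility rate of about 50%) quoted from memory and therefore to be taken as indicative:

```python
# Rough reconstruction of the Freedman, Cockburn & Simcoe estimate.
us_preclinical_spend = 56.4e9   # approx. annual US preclinical research spending, USD
irreproducibility_rate = 0.50   # approx. estimated share of irreproducible work

wasted = us_preclinical_spend * irreproducibility_rate
print(f"~${wasted / 1e9:.0f} billion per year")  # ~$28 billion per year
```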
It is therefore no coincidence that the first to raise the problem of reproducibility were researchers working for multinational companies, certainly more attentive to budgets and to the return on investment. Perhaps it is thanks to them that the list of illustrious researchers denouncing this "short circuit" keeps growing.
Dr. Khusru Asadullah, a senior executive at Bayer, reported that researchers at the German multinational had failed to replicate more than 65% of the experiments on which they were basing new research3.
Prof. George Robertson of Dalhousie University in Nova Scotia likewise recalls that, when he worked on neurodegenerative diseases at Merck, his team noticed that many academic scientific papers did not stand up to the test of reproducibility4.
In search of the causes of this crisis of science
Science is in crisis: we are not yet willing to admit it publicly, but it is time to start a debate.
Among the causes of this "reproducibility crisis" there are certainly the technical issues described by Bissell, but there are also more human factors, such as scientists' need to publish in order to advance their careers and obtain funding; sometimes their very employment contracts are tied to the number of publications they manage to produce. As Ferric Fang of the University of Washington puts it: «the surest ticket to getting a grant or a job is being published in a high-profile scientific journal. This is something unhealthy that can lead scientists to chase sensational results or, at times, to behave dishonestly».
Ken Kaitin, director of the Tufts Center for the Study of Drug Development, is even more direct: «If you can write an article that gets published, you don't even think about reproducibility: you make an observation and move on. There is no incentive to determine whether the original observation was wrong by chance.»
A system, that of academic research, which is evidently dragging science towards a crisis of identity and credibility. In 2009 prof. Daniele Fanelli of the University of Edinburgh published a study with an emblematic title: «How many scientists fabricate and falsify research?»6
Almost 14% of the scientists surveyed said they knew colleagues who had outright fabricated data, and 34% admitted to having selectively reported data in order to obtain the results they were interested in.
In June 2017 prof. Jonathan Kimmelman, director of the Biomedical Ethics Unit at McGill University in Montreal, published a new study that confirms this reproducibility crisis and tries to shed light on some of its main causes, such as the variability of laboratory materials, the complexity of experimental procedures, poor organization within research teams, and a lack of capacity for critical analysis7.
Neither universities nor scientific journals are interested in reproducibility studies
It should also be considered that the academic system does not reward those who carry out reproducibility studies at all: from the standpoint of a research group's "productivity", they are time and money thrown away.
Scientific journals themselves are not very interested in publishing research that demonstrates the non-reproducibility of previously published work; they prefer innovative research and surprising results, and so news of replication failures is easily buried.
Finally, one must keep in mind that today some research is so specialized that only a few experts can understand and evaluate it; this sterilizes peer review (the review of a scientific study by experts, who decide whether to publish it, ask for clarifications or reject it). In some cases there is a real risk that scientific journals publish almost blindly, along the lines of: I did not understand what you are talking about, but it all seems serious and well done, you have a good reputation, so I will publish it.
«This does not mean we must now think that all scientific studies are unreliable - claims Andrea Penotti, director of the Interdisciplinary Life Science Institute - we must have the strength to carry out a serious self-criticism within the world of science, without falling into the opposite excess of a "witch hunt", which would lead to a serious crisis of credibility not only in the eyes of the general population but also among doctors and fellow researchers».
The history of science has always told of an evolution that passes through great crises: from the questioning of the geocentric system to the introduction of quantum physics. The beauty of science has always been its ability to question itself and emerge better than before; and often these great revolutions require not large budgets but only genuine flashes of genius and honesty.
"Putting your finger on the scourge of this crisis of credibility is vital for us who work on interdisciplinarity, the need to integrate different disciplines requires more than ever a clear comparison and makes any inconsistencies emerge more easily - Explains Andrea I thought about it on the occasion of the world congress of studies on Consciousness held in San Diego together with the linguist Noam Chomsky - pFor years science has focused on the analysis of nature's "single pieces", has dissected it in search of primordial gears. It is now necessary to rediscover the ability to connect the individual pieces studied and better understand the meaning of those processes that guide the organization and evolution of living matter. We need to return to the simplification of concepts, move from a syntactic of life to a semantics of life ».
Sources:
- Nature - 1,500 scientists lift the lid on reproducibility
- Nature - Reproducibility: The risks of the replication drive
- Nature Reviews Drug Discovery - Believe it or not: how much can we rely on published data on potential drug targets?
- Reuters - In cancer science, many "discoveries" don't hold up
- PLOS Biology - The Economics of Reproducibility in Preclinical Research
- PLOS ONE - How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data
- PLOS Biology - Can cancer researchers accurately judge whether preclinical reports will reproduce?