This is a piece I wrote in 2016 and updated slightly in 2019. While the details may be outdated, I feel that events since have shown its underlying criticism to be more justified than ever.
The first big science project dates back to WWII: the famous Manhattan Project. As is well known, this was a US-led research project, with some participation from other nations such as the UK and Canada, that aimed at the construction of the first atom bombs, which were later dropped on Hiroshima and Nagasaki. Was it a success? In a certain sense it was, since it obtained the desired result and put an end to WWII. But the Manhattan Project came into existence in an atmosphere of war, fear, and distrust, and it led to a huge loss of life that made clear what a horror a nuclear holocaust could be. Nobody today holds it up as an example to justify funding new projects.
Shortly after the Manhattan Project, the international community launched a large-scale research effort aimed at building a controlled nuclear-fusion reactor (harnessing the type of nuclear reaction that makes stars burn), which was supposed to save us from future energy crises. But after more than half a century it remains unclear whether it is possible even in principle to build one: nobody knows how to build a chamber that can efficiently confine the hot plasma for long enough.
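To give a sense of why confinement is the crux, the standard yardstick is the Lawson criterion: for deuterium-tritium fuel, the product of plasma density $n$, temperature $T$, and energy confinement time $\tau_E$ must exceed roughly

$$ n \, T \, \tau_E \;\gtrsim\; 3 \times 10^{21} \ \text{keV·s/m}^3 $$

That is, a plasma at well over 100 million °C must be held dense and stable for on the order of seconds, a threshold no experiment had cleared at the time of writing. (This is the textbook condition for D-T fuel, not a figure specific to any particular reactor design.)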
In the 1960s we had the Apollo program, and by the end of that decade astronauts were sent to the Moon. But today, almost half a century later, everyone realizes that it was about the Cold War and politics, certainly not about science and the wellbeing of humanity as a whole. And, frankly, where is the 'giant leap for mankind'?
How many remember that, at about the same time, former US President Richard Nixon declared a 'war on cancer', pouring billions of dollars into research against the 'disease of the century'? Again, after half a century, despite some progress, cancer remains the lethal disease of the new century too. While the death rate is declining steadily, after all these enormous investments in cancer research and therapy the net result is very disappointing compared to the initial expectations. It is still a matter of dispute whether this lower mortality rate is due to the effectiveness of therapies (which are frequently plagued by heavy side effects, sometimes deadly themselves) or whether it must be credited to other factors, such as the decline in smoking and the use of early-detection tests and more frequent screenings.
And what about the Space Shuttle program? It was supposed to become a cheap and reusable space transportation system. Instead, it turned out to be a bottomless pit. And despite clear evidence from previous historical experience that robotic space exploration produced far better scientific results, for far less money, than sending humans into space, the international community nevertheless pursued the International Space Station (ISS). It is an impressive feat of space engineering, and it makes for a great YouTube channel where millions can contemplate the Earth from space. Perhaps that turned out to be its greatest gift to humanity. The real contribution of the ISS was not scientific but psychological: it allowed everyone to admire the Earth live from space in 4K resolution, transmitting a glimpse of that sense of the unity of Nature and humankind that several astronauts talk about. But we should remember that it was advertised for its potential, such as the production of new medicines and advances in materials science, and so far not much has come of it.
In March 2000, former US President Bill Clinton made another great announcement, this time about the mapping of the human genome. Billions upon billions of dollars were invested in order to usher humanity into the era of 'personalized genetic medicine'. That was the promise. A dozen years later, the widespread consensus was that the Human Genome Project had been quite disappointing. It turned out that 'life is complicated', since our cells are much more complex than we suspected. Any hope of healing genetic diseases therefore remains as far off as ever.
For at least three decades now we have been hearing about the coming age of a bio-engineering and agrarian revolution that would save us from genetic diseases and feed a world plagued by overpopulation [why this is a naive assumption I described in my trilogy on climate change, global overshoot, and overpopulation here]. But, while we are still waiting for some sort of 'personalized medicine', people in so-called third-world countries continue to starve. Genetic engineering and the application of genetically modified organisms in the food industry remain as controversial as ever and, amid fears, ethical concerns, and a lack of real progress, continue to raise skepticism. After the first cloning of a sheep, Dolly, in 1996, the world was thrilled by the prospects of big-science medicine, in particular by the culturing of stem cells with the promise of growing human organs for transplants. What happened to the radical breakthroughs? Much was promised, but as of 2019 not much has been delivered. As the biomedical engineer Professor Michael Sefton put it, they had been 'hopelessly naive', since 'organs are immensely complex.'
Recently, stem-cell research has been overshadowed by the advent of yet another big science project: CRISPR (clustered regularly interspaced short palindromic repeats), a genome-editing technique that allows researchers to alter DNA sequences and modify gene function. CRISPR has inspired hopes about its potential applications, from gene therapy to the improvement of crops, and it looks somewhat more promising than previous attempts. But CRISPR won't be curing the sick anytime soon either.
On top of that, there is a growing awareness of how bad science is determining not just scientific outcomes but the lives or deaths of millions. A telling example is the account of Richard Harris, an American science journalist, in his book "Rigor Mortis", which describes how American taxpayers spend about half of the $30 billion in annual funding for biomedical research on studies that can't be replicated due to poor experimental design, improper methods, and sloppy statistics. Harris describes a dysfunctional biomedical system in which sound scientific criteria and rigorous methods have been replaced, much too often, by procedures that once would have been regarded as inexcusable but that are now increasingly becoming the norm. What was once called the 'scientific method' is becoming "an illusion of progress by wrapping incremental advances in false promises", in the words of Sabine Hossenfelder, a German theoretical physicist based in Frankfurt who is also well known for her criticism of how modern particle physics is managed and pursued.

Big science initiatives are no longer about creating enlightenment but, rather, excitement. Most of the money goes into producing papers with exciting but ultimately empty headlines. This self-sustaining multimedia hype cycle creates research bubbles which, sooner or later, become unsustainable and are destined to burst. Meanwhile, politicians are all too happy to jump into this self-sustaining circus and talk about international competitiveness to keep the money flowing. Yet the hard facts on the ground are lacking. In most cases, for these big science projects, tangible progress is difficult to see and real breakthroughs are not coming. The irony is that most scientists are well aware of this but prefer to keep going. After all, they make a living out of it, and it is hard to escape the instinct of self-preservation. Those who find themselves too annoyed by it simply quit, as I did myself. The ones who survive are those who adapt best and are most prone to accepting the state of affairs. Many also like to convince themselves that because the research is so big and so complex, it is only natural that it requires more time: to reach the goal, we need another two or three decades. After that time has passed, they will retire and a new generation of scientists will take the lead, repeating the same argument as a mantra to justify another three decades of the same research with the same methods and the same mindset.
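The arithmetic behind Harris's replication point is worth spelling out. Here is a minimal sketch, with purely illustrative numbers (they are assumptions for the sake of the example, not figures from "Rigor Mortis"), of how underpowered studies combined with a low prior probability of true hypotheses make most published 'positive' findings false:

```python
# Share of "statistically significant" findings that are actually true,
# following the standard positive-predictive-value argument.
# All numbers below are illustrative assumptions, not data from any study.

def ppv(prior, power, alpha):
    """Probability that a significant result reflects a true effect."""
    true_positives = power * prior          # real effects correctly detected
    false_positives = alpha * (1 - prior)   # null effects crossing p < alpha
    return true_positives / (true_positives + false_positives)

prior = 0.10   # assume 1 in 10 tested hypotheses is actually true
power = 0.20   # underpowered studies, common in preclinical research
alpha = 0.05   # conventional significance threshold

print(f"PPV = {ppv(prior, power, alpha):.0%}")  # ~31%: most "positives" are false
```

Under these (deliberately pessimistic but not unrealistic) assumptions, roughly two out of three 'discoveries' in the literature would fail to replicate, which is the dynamic Harris documents.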
However, all this is not only a consequence of humanity’s selfishness or bad faith.
Upon closer inspection, the universe is revealing to our research an ever-increasing complexity that is quickly escaping the grasp of the analytic mind. Even our personal lives, which are manifestly influenced by this very same scientific and technological revolution, have become complex to such a degree that they are unlikely to remain controllable for much longer. At some point, a mental civilization that drives itself towards ever-increasing complexity is doomed to collapse, or at least to relapse.
And yet, the lesson has still not been learned. Now we hear about other projects similar to the human genome mapping. For instance, the EU was willing to pump more than a billion euros into the 'Human Brain Project' (HBP), which was supposed to involve hundreds of researchers from 135 partner institutions in 26 countries. In the words of its official website, it proposes to integrate everything we know about the brain into computer models and to use these models to simulate the actual workings of the brain; ultimately, it will attempt to simulate the complete human brain. The project aimed at building a full computer model of a functioning brain on which, for example, drug treatments could be simulated. [How did it go? Not well.] On the other side of the ocean, the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies) was announced by the administration of former US President Barack Obama on April 2, 2013, with the goal of mapping the activity of every neuron in the human brain, and was projected to cost more than $300 million per year for ten years.
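To make concrete what 'simulating the brain' means at its smallest scale, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest textbook neuron models (all parameters are illustrative assumptions, not taken from the HBP's actual code). A human brain contains on the order of 86 billion neurons, each far more complex than this, wired together by roughly 10^14 synapses:

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential decays
# toward its resting value and fires a spike when it crosses a threshold.
# All parameter values are illustrative, textbook-style numbers.

V_REST, V_THRESH, V_RESET = -65.0, -50.0, -65.0  # membrane potentials, mV
TAU_M = 10.0   # membrane time constant, ms
R_M = 10.0     # membrane resistance, MOhm (so R_M * I is in mV for I in nA)
DT = 0.1       # integration time step, ms

def simulate(input_current_na, n_steps):
    """Forward-Euler integration of dV/dt = (-(V - V_REST) + R_M*I) / TAU_M."""
    v, spike_times = V_REST, []
    for step in range(n_steps):
        v += (-(v - V_REST) + R_M * input_current_na) * (DT / TAU_M)
        if v >= V_THRESH:              # threshold crossed: spike, then reset
            spike_times.append(step * DT)
            v = V_RESET
    return spike_times

# A constant 2 nA input makes this model neuron spike roughly every 14 ms:
print(simulate(input_current_na=2.0, n_steps=1000))  # spike times over 100 ms
```

Scaling even a toy like this to billions of biophysically detailed, densely interconnected units is what the whole-brain simulation pitch quietly glosses over.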
It seems that nobody recalls how, in the 1980s, the Japanese government launched a similar project, the 'Fifth Generation Computer Systems' project, aimed at building AI on massively parallel computing platforms; it soon turned out that it could not meet expectations. The nature and workings of intelligence are much harder to decipher than previously thought. Will modern supercomputers be more successful? There are good reasons to doubt it: several neuroscientists, in an open message to the European Commission, criticized the HBP as highly 'controversial and divisive without transparency' and called for the EU to eventually 'redirect the HBP funding to smaller investigator-driven neuroscience grants'. Only six years after its inception, most neuroscientists agreed that the HBP had not lived up to its promise.
However, most of the attention swirling around the AI sector is focused on self-driving cars. A vision of a future in which driverless cars and futuristic robotic transportation systems dominate our daily lives is presently the dream hypnotizing the public, thanks to a large media campaign that celebrates the supposedly great breakthroughs and that, of course, the present industrial scene takes advantage of. Billions of dollars are spent on R&D, and the world's largest corporations, such as Google and Apple, are betting heavily on this emerging technology. Again, thousands are employed as a raw labor force to realize this goal in a concerted effort inside a huge industrial and managerial think-tank environment. One wonders: will it be worthwhile? The answer obviously depends on the success or failure of this line of research. If it succeeds, fine. However, the initial enthusiasm is now fading, because we are slowly but steadily realizing that driving a car is not at all the kind of mechanical task a machine can simply perform. Rather, it requires knowledge and the ability to predict human behavior, which, obviously, only humans possess. When several accidents occurred, some with casualties, and received a significant amount of media attention, things became even worse. It is now clear that fully automated driverless cars are not to be expected soon.
If children, in their families and/or schools, were allowed to learn to look inside themselves instead of always being forced to externalize their consciousness, becoming aware of how their thoughts and feelings manifest and of how their own minds work, then as adults they would have no trouble realizing that the cognitive functions at work while one drives a car have nothing to do with the kind of processes (deep-learning neural networks, etc.) that AI mimics and that are supposed to lead us to build autonomous vehicles. It is not a matter of analytic thinking and scientific knowledge; it is a matter of knowing oneself. This is something most of us have never learned (or were taught to forget and ignore), a fact especially true among scientists, who were trained in an empirical, externalizing mode of thinking.
The next big thing, running in parallel with self-driving cars nowadays, is the so-called 'quantum computer'. As the name indicates, quantum computers take advantage of quantum-physical processes that, in principle, would allow them to solve certain kinds of problems that ordinary computers can't tackle, or can tackle only with much longer processing times. The idea is not new; it dates back to Richard Feynman, the famous American physicist and Nobel laureate, who in 1982 showed how conventional computers could, in principle, be outperformed by a hypothetical universal quantum simulator. However, only in recent times has the technology become mature enough to encourage scientists and investors to consider its practical realization. The hype surrounding quantum computing spread fast when, in the mid-to-late 1990s, theoretical progress was made and IBM began working on it, and it spread again about ten years later, when Google jumped into the race. This is, again, another technology that was, and still is, supposed to change the world and all our lives but that, so far, has not. After over two decades of intense R&D and significant funding, the results are disappointingly far from those originally predicted. Google announced that it would reach 'quantum supremacy', that is, the realization of a roughly 49-qubit (quantum-bit) device capable of solving at least one problem that a classical computer cannot, by the end of 2017. In fact, in March 2018 the company presented a 72-qubit quantum processor which, however, could not perform any computation more efficiently than conventional IT could. Quantum bits are plagued by noise that cannot easily be eliminated, and scientists now speak of thousands, if not millions, of qubits being necessary to build a useful error-corrected machine, which means it will not become a reality anytime soon. The more time passes, the clearer it becomes that the technical challenges are much harder than previously expected. Moreover, there is growing skepticism over whether quantum computing will be able, in practice or even in principle, to outperform classical computing devices. It remains to be seen whether quantum computers will ever live up to the hype. What is certain is that the pace of technological advance in quantum computing is far slower than the one that produced the good old PCs we work with today. The fact is, again, that the expectations so excitingly announced years, if not decades, earlier were not met, and the number of skeptics who express doubts about the practical realizability of quantum computers is growing.
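Feynman's 1982 argument, and the reason a few dozen qubits became the 'supremacy' benchmark, is easy to make concrete: to simulate n qubits exactly, a classical computer must in general track 2^n complex amplitudes. A quick back-of-the-envelope sketch (plain arithmetic, no vendor-specific figures):

```python
# Memory required to store the full state vector of an n-qubit system:
# 2**n complex amplitudes, each taking 16 bytes in double precision.

def state_vector_gib(n_qubits):
    return (2 ** n_qubits) * 16 / 2**30  # bytes converted to GiB

for n in (10, 30, 50):
    print(f"{n} qubits -> {state_vector_gib(n):.6g} GiB")

# 10 qubits -> 1.52588e-05 GiB  (16 KiB: trivial)
# 30 qubits -> 16 GiB           (a single workstation's RAM)
# 50 qubits -> 1.67772e+07 GiB  (about 16 pebibytes: beyond any classical machine)
```

Real quantum hardware does not pay this exponential memory cost, which is precisely the promised advantage; the catch, as noted above, is the noise accumulating in every physical qubit.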
This was only the first part of the list of unsuccessful big science projects. The second part will list several more and draw some conclusions.