This is the third part of an ongoing dialogue between me and Sergio Bellucci, journalist, essayist, Italian representative of UPEACE, the UN-mandated University for Peace, and author of AI: A Journey into the Heart of the Technology of the Future. For those who missed the earlier parts of this conversation, they are available in previous posts.
Those who read my books and articles know that I have always maintained that the real transition, the real change, the real paradigm shift must first of all happen within each of us. Everything passes first through the human being, who has been demonstrating to us—not only in recent years, but for a long time—that humanity lacks the degree of consciousness, lucidity, and awareness needed to generate widespread trust in those who have the power and the ability to make important decisions on behalf of us all.
Artificial intelligence has great potential, like atomic energy, but if we make bombs out of it that we use to destroy each other, it's better to leave it alone.
Many of the scientists and technicians who work with AI—the very Stephen Hawking you mentioned earlier—point out the risks inherent in this tool and ask for rules to be established to limit it, to manage it in a way that minimizes its possible negative effects. But who establishes the rules?
The democratic system tells us that it is the duty of politics to establish the rules of civil life. But we know well that politics today is not a method of governing people and countries free from external influence. On the contrary, many observers describe it as the instrument of multinationals and of economic and financial powers, whose role is merely to translate the wishes of those powers into legislation. The latest US elections, with Elon Musk at Trump's side, have only made explicit what has been clear to many for a long time: the decision-making role belongs not to the political world but to the financial one.
That said, the fact remains that humanity, perhaps for the first time since its appearance, has developed a tool that, used with the best intentions, could have wonderful consequences for us, as you said earlier: it could even help us to understand our defects and flaws, to improve ourselves, to evolve in the true sense of the word. But at the same time it has tragic, tremendous potential.
We are still at the dawn of this technology, and we are already seeing, for example, videos made with AI that are practically indistinguishable from real ones. The further we go, the more convincing they will become, precisely because, as we said, AI learns from the data we provide it and therefore does its job better and better.
You are a journalist; we both work in the field of information, and we know very well how crucial it is in this historical moment, and how much people follow and give credence to what the media propose to them. If this information, which is already manipulated today, to use a euphemism, were tomorrow in the hands of artificial intelligence, we would no longer be able to distinguish the true from the false.
So we are back to the starting point; it is man who must change before moving forward. It seems to me as if we have arrived at AI too early, unprepared for this tool. The being that generated it is not able to understand it.
You hit the nail on the head. I am of the school of the old philosopher from Trier, who said that humanity only asks itself questions it can answer. This means that somehow, somewhere, in some form, humans possess the ability to govern AI.
Even so, the leap we are facing is a major one. You were talking about rules. Rules are not enough.
I often use this example to explain what I mean. Imagine if, on the day man invented the wheel, there had been a legislator who, looking at it, had regulated its use by establishing that "the wheel can only be used for such and such a function."
That rule would have had no success, because human history would have put the wheel to billions of different uses anyway. Once you have created a tool, you cannot decide the terms of its use; those are established, in fact, by custom and by the needs of the moment.
Today we are faced with machines, with tools capable of modifying the very model of knowledge, even in epistemological terms.
We are talking about an exponential increase in the means humans have available to understand and define their own relationship with reality, and that cannot be regulated. What is possible - and desirable - is to choose ways of working and to build models that become examples. In this sense, the development of these technologies could be directed towards one outcome rather than another.
Using, therefore, the Gramscian concept of hegemony rather than rigid rules within which to box a world that, in fact, cannot be boxed. It is an illusion to think that the world works through rules and norms. It does not work that way even for far more banal things, so much so that prisons are full of people who carry out activities considered illegal: you, the State, say "you cannot smoke marijuana!", yet society continues to do so.
If a totally regulated system is evidently ineffective for such trivial things - prohibitions that an intelligent legislator, understanding that society works differently and has its own social compatibilities, would erase - you can imagine its inadequacy on far more complex issues such as AI, which risks putting in the hands of a very few an enormous power capable of conditioning everything.
So I believe it is necessary to generate, immediately, models that can become hegemonic, models within which humanity can recognize itself.
Totally agree; I even wrote an article about the futility of a regulated world.
Indeed.
Now, the real point is that for the first time in centuries, if not millennia, Europe is not at the centre of the processes of innovation. What does this mean? First of all, that the world is much more complex than today's Europe, and that the Eurocentrism that has characterized us psychologically must give way to a concrete and genuine multipolarity (not the fake globalization that is now showing its failure): a world with its different qualities, with the differences that characterize it, and so on.
But there is a fact: here in Europe, as a consequence of the two conflicts of the last century, a certain type of social organization had been generated, with models of society based on an idea of inclusive welfare, founded on the rights of people, workers, and minorities. Those models had produced a particular relationship with a certain interpretation of history. I am thinking, for example, of the reading of class composition given by historical materialism: the classes, their becoming, their relationships, and so on. The fact that this technology is instead developing substantially in the United States and in China, in completely different cultural, social, and economic environments and with purposes different from those of the European model, signals the end of one historical path and the beginning of a new one.
Now, we do not know whether the American model or the Chinese model will prevail in the end. What I see in a much more concrete way—and it is one of my fields of work at the moment with the University for Peace—is the need to avoid a process of technological divergence in the world. What is happening?
The neuromorphic microprocessors we talked about are a very sophisticated technology which, until now, has been in the hands of the United States. When artificial intelligence technologies underwent their sudden acceleration, the question arose of whether these microprocessors should be reserved for a single geographic area: China, to be clear, was not to have advanced neuromorphic microprocessors, to prevent it from developing artificial intelligence systems similar to those used in this other part of the world. Obviously the decision was not commercial in nature; it was a geopolitical decision concerning the military balance between the two worlds. And what has this led to? For a brief phase this embargo produced a differentiation: an acceleration on one side and a delay on the other.
But since, as we saw with the example of the wheel, you cannot limit these developments to a few uses, China decided to invest in producing its own neuromorphic microprocessors capable of competing with the American ones. Now, beyond the fact that the speed of the two worlds may differ, what is interesting to highlight is that Chinese technological development is diverging from the Western one. What does this mean?
It means that we could find ourselves, in a very short time, with two different technologies, perhaps not interoperable, and therefore with a separation of the world that is no longer only the result of a political decision—and therefore ultimately reparable through an open negotiating table—but of a real technological divergence. It is like building two railways with different gauges: they are not compatible.
A difference not only in technology, but in the way of thinking.
Absolutely, and these technologies are already highlighting this in part. They work in relation to the structure of language, and languages are different; even the writing systems are different. Those who write in ideograms think in a completely different way from those who write with alphabets, and this difference is generating significant problems of dialogue.
So we are truly in the presence of a historic transition. It is not all as simple and beautiful as we are told.