Let's talk about artificial intelligence, a topic of extreme current interest that raises many doubts and many concerns and, at the same time, arouses great enthusiasm.

I am approaching this topic thanks to the expertise of Sergio Bellucci, who, in addition to being a dear friend, is a journalist, essayist, Italian representative of UPEACE—the UN University for Peace—and author of the book AI - A Journey into the Heart of the Technology of the Future.

Let's start with something apparently trivial: why do we call it artificial intelligence?

We call it artificial intelligence because of a marketing operation from the 1950s. In the book, I tell how, during that period, the mathematician John McCarthy, while raising funds to finance his research project, decided to call it “artificial intelligence,” a definition that was very successful because it appealed to a perhaps unconscious desire in the world of finance to take part in this process. And in fact, he raised a great deal of money, and from that moment, the name “artificial intelligence” has become, at least in the computer world, a topos, a central element.

Over time, there have been several attempts to build intelligent models, and, outside the scientific world, there was great anticipation and excitement because it seemed that a way to develop an intelligent system had finally been found. But the models were born and collapsed with a certain rapidity, and just as quickly, the interest in so-called AI cooled. So over these decades we have experienced very significant ups and downs. Then, at a certain point, the turning point of so-called “machine learning” arrived, and the quality and form of this technology underwent a strong acceleration that made it progress rapidly.

I heard that OpenAI, the creators of GPT-4, claim that this tool can understand. What do they mean?

A definition of intelligence could be “the ability to make decisions in certain contexts in order to achieve an objective.” There is a very open discussion, and perhaps even an evolution away from a sort of integral anthropocentrism according to which only humans are intelligent, and intelligence is therefore a human prerogative.

For quite a long time, it was thought that intelligence could be quantified, and a mechanism was invented to give a numerical value to intelligence—the famous IQ. Over time, analysts have understood that human beings have various types of intelligence, and therefore speaking of intelligence in the singular has become, for scholars, somewhat reductive because each human has a certain quantitative level for each of the different intelligences that exist; therefore, each has their own shade of intelligence.

So, science has been forced to question the definition of intelligence, and the studies developed in various fields have helped us understand that the world actually contains an intelligence that goes beyond human intelligence. For example, we have seen that the different animal species are all intelligent, that is, they are able to interact with the environment and make decisions based on an objective, often that of survival and reproduction, which are not trivial objectives. Indeed, we are talking about what are, in all respects, the primary tasks in the evolutionary processes of life.

As a result, we have understood that not only do humans have different types of intelligence, but that there are many outside of humans. Another example is the discovery that the mycelium, the vegetative apparatus of fungi that forms an immense underground network, connects plants to each other, allowing them to communicate, managing, for example, to understand each other's needs and to intervene to support those among them who are suffering. They know perfectly well that collective survival is the only way to survive and that, if you are alone, it is difficult to stay alive. So even the plant environment itself lives on a concert of interests, adaptations, and decisions—a system of connections that we can define as intelligent.

But even today, science is working on the concept of intelligence, and this highlights how ambiguous and difficult its description is. For example, the Santa Fe Institute in New Mexico—which in the 1980s pioneered the paradigms of complexity and chaos theory and which is a great melting pot of skills at a global level—just this year in August opened a line of study to work on the meaning of intelligence. We are going through a phase of rethinking a certain type of structure, and the idea is making its way that even some products generated by human beings could have their own form of intelligence, which obviously has nothing to do with human intelligence, with that of a body that is born, grows, lives, and experiences emotions. But this is what we define as human intelligence, and it is not the model of intelligence in absolute terms.

Well, go figure it out, Danilo, how we are made and how the world is made. Luckily, there are many things still to discover.

We will return to this topic because it is very interesting. Bill Gates, who also works closely with OpenAI, says that it is not clear how knowledge is acquired by artificial intelligence. Even Tim Urban, a blogger who is followed by Elon Musk in particular, says that it is not clear how AI works because it forms itself. Since I am not a technician, this aspect is not at all clear to me.

Of course, because we need to understand the technological level we have reached. Otherwise, we have difficulty digesting information of this nature due to a legacy we have carried with us for a very long time: that of the instrument.

Humans probably became humans when they differentiated themselves from other animal species by inventing tools. Stanley Kubrick describes it wonderfully in 2001: A Space Odyssey when he shows the ape playing with a bone and realizing that the object allows him to generate an action, such as breaking the skull of a dead animal. At that moment, he has the intuition that the bone can multiply the power of his hand. It is in that moment, probably, that humans distinguished themselves from other animals, thanks to the invention or discovery of the tool. For hundreds of thousands of years, we have been working to refine these tools and multiply them until we have computers. This is why I speak of a legacy that has lasted a very long time in the refinement of things that serve to increase the power not only of our limbs but also of our senses. This implies that, as a human being, I can create tools, even powerful ones, to realize what was previously imagined.

This thought reaches up to the development of industrial society, which constitutes a clear separation between the before and after in human history because, from that moment, the tool becomes a machine, and the machine incorporates within itself an action that was previously exclusively human, turning it into a machinic system.

For a few decades, computer systems worked with mechanical logic; the computing power, first with vacuum tubes and then with microprocessors, was designed like the machine system, with an engineer deciding how the machine should do the computing. There was a human decision about how the microprocessor would operate and how the software that used that microprocessor would work.

Instead of a microprocessor hosting software, with both designed by an engineer, so that the result remains within a human dimension, today there are much more effective systems built like a wafer, that is, composed of multiple layers. An input enters the system and, passing through the layers, produces an output. The path through the wafer is initially essentially random, and if the result is negative, the path taken for the calculation is downgraded in importance; if the result is positive, instead, it is strengthened.

So, the path structure that produces the result of the calculation is essentially random; it is not knowable, in the sense that it is not possible to trace which path is taken, which is why these systems are called "black boxes." The operating mechanism is completely new, and the point is that the check on whether a result is good or bad, which in an initial phase of the development of artificial intelligence was done under human control, is now done automatically. That is, there are software programs that manage billions of good/bad judgments in fractions of a second, yielding far more effective training, from a technological and engineering point of view, with results that are not always perfect but with an ever-greater capacity to incorporate information and manage relationships with the environment.
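The reinforcement mechanism described above can be illustrated with a deliberately simplified sketch. Everything here is invented for illustration: a handful of candidate "paths" compete to produce a target output, and an automated good/bad check strengthens the paths that succeed and downgrades those that fail, with no human in the loop.

```python
import random

# Toy illustration of outcome-driven reinforcement (not a real network).
# Each "path" i always outputs the value i; all paths start equally weighted.
random.seed(0)

TARGET = 7                           # the "correct" output we want
paths = {i: 1.0 for i in range(10)}  # path -> weight

def pick_path():
    # Choose a path with probability proportional to its current weight.
    total = sum(paths.values())
    r = random.uniform(0, total)
    for i, w in paths.items():
        r -= w
        if r <= 0:
            return i
    return i  # float fallback

for _ in range(5000):       # "billions ... in fractions of a second" in real systems
    i = pick_path()
    if i == TARGET:
        paths[i] *= 1.05    # good result: strengthen this path
    else:
        paths[i] *= 0.99    # bad result: downgrade its importance

best = max(paths, key=paths.get)
print(best)
```

After enough automated good/bad judgments, the path that produces the right answer dominates the weights, even though no one ever specified *which* path to take; that opacity is, in miniature, the "black box" character mentioned above.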

For example, a robotic arm can control an action, so we are starting to have an interaction between these calculation capabilities and actions in the physical world. And just as a human being learning to perform an action, such as picking up an object, may make a mistake and drop it, the same happens with the robotic arm: at first it acts in an apparently thoughtless manner, before working out the best way to grasp the object firmly. We are at the beginning of a structure that is modifying the ways in which these machines learn.
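The grasping example can be sketched in the same spirit. The numbers and the update rule below are invented purely for illustration: a simulated "arm" starts with too gentle a grip, drops the object, and adjusts from each failure until the grasp holds.

```python
# Toy trial-and-error grasping loop (illustrative values, not a real controller).
REQUIRED_FORCE = 5.0   # a property of the simulated world, unknown to the "arm"

def try_grasp(force):
    """Simulate physics: the grasp holds only with enough grip force."""
    return force >= REQUIRED_FORCE

force = 1.0            # initial, apparently thoughtless attempt
attempts = 0
while not try_grasp(force):
    attempts += 1
    force += 0.5       # dropped the object: try a firmer grip next time

print(attempts, force)
```

The arm never receives a formula for the right grip; it converges on one through repeated failure, which is the learning pattern the passage describes.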

The very famous and widely used GPT-4 has already been far surpassed because, since April of this year, new structures for developing the response algorithm have begun to emerge that are producing a real leap in quality. It seems that the latest model, “o1,” which came out at the beginning of September, is based on real reasoning; there is a qualitative leap in the logic with which the system works.

Your answer gives me the opportunity to ask other questions that came to me while you were speaking, but I'll save them for our next episode.