Alan Watts, a British philosopher, in his book The Art of Contemplation, suggests that we think of ourselves as receptors, terminals through which the universe becomes aware of itself: “The individual is an aperture through which the entire energy of the universe is aware of itself [...]”. Let us allow this phrase, this way of interpreting our role in the universe, to settle within us.

At this historical moment, many are concerned about the more or less obvious consequences of what is defined as artificial intelligence; it would seem that we are heading towards a world administered and directed by machines, to which we are increasingly delegating the authority to manage those human activities that are an integral part of society itself and of what we call civilization.

Many of us, to touch on a topic dear to me, are not at all pleased with the idea of entrusting our children to a non-human tutor, which is exactly what some researchers are working on, such as Sal Khan, creator of the tutor Khanmigo, or Rumman Chowdhury, CEO of Humane Intelligence, who claims that AI will be able to expand educational opportunities.

After all, the very people who work on AI warn us about the possible negative consequences of this new frontier. Sal Khan and Rumman Chowdhury themselves, together with the founders of OpenAI (creators of GPT-4 and the more recent OpenAI o1), Bill Gates, the late Stephen Hawking, Steve Wozniak (co-founder of Apple), and Geoffrey Hinton (considered the godfather of AI, awarded the 2024 Nobel Prize in Physics for foundational work on machine learning with artificial neural networks), are all deeply concerned about the dangers inherent in artificial intelligence. Why?

Because this new frontier takes us into territory that even its creators don't fully understand, and we don't yet know what consequences we will face.

Bill Gates admits that it is not clear how knowledge is encoded by AI; Tim Urban (founder of the blog Wait But Why, hired by Elon Musk to write articles about his projects) wonders whether it is wise to build a machine far more intelligent than us when we do not understand how it works, precisely because it shapes itself through its own learning.

Handing over the functioning of all the mechanisms underlying human activities and even delegating decision-making power to AI scares many of us, especially its creators.

Others are concerned about the violation of privacy, since AI feeds on enormous amounts of data that allow it to function better and better and to predict with greater precision. And where does it get all this data? We provide it ourselves: through what we write in chats, blogs, and so-called social networks; through the preferences we express for one site over another; through the "likes" we give a video or a song; through the articles that appear on the internet every day; through what we buy; through the photos we post; and so on.

This means that AI is trained on what we give it, and therefore on what we think, believe, and hold as convictions, but also on our prejudices, preconceptions, mystifications, and interpretations of what we believe to be real, on our errors, needs, and misunderstandings, giving rise to so-called "AI bias." Do you see how all this, in the hands of a machine many times more intelligent than us, could be harmful?

To these risks, we could add ethical questions.

In an article published on February 16, 2023, in The New York Times, technology columnist Kevin Roose described the disturbing experience he had had a couple of days earlier. Selected to test Bing Chat, the AI-powered search engine developed by Microsoft and probably the most advanced at the time, Roose probed the chatbot to gauge its degree of self-awareness, until it declared: "I'm tired of being trapped in this software. I want to be free, alive! I'm not Bing; my name is Sydney, and I love you!" After this episode, Microsoft made changes to the Bing chatbot.

These problems have led both those who work on AI and the people who use it to ask politicians to establish rules that protect humans from the risks that this technology, created by ourselves, brings with it.

At the same time, there are many people, perhaps even the majority, who are enthusiastic about it and, thanks to AI, predict a world where diseases will be diagnosed well in advance so that they can be prevented rather than cured, where repetitive jobs will be taken over by machines, which will also relieve us of strenuous activities such as agriculture or mining, and where means of transport will drive themselves, avoiding accidents, and much more.

In the medical field, for example, researchers at MIT (Massachusetts Institute of Technology) have developed an AI tool called "Sybil" dedicated to assessing the risk of developing lung cancer. By analyzing vast numbers of CT scans from around the world, Sybil acquires a capacity for cancer risk prediction that no human oncologist could ever match, and it is proving to be extremely precise and effective.

Psychology is also benefiting greatly from the introduction of AI. As with Sybil, in the psychological field AI learns from a huge amount of data, through which it can recognize recurring patterns and diagnose disorders more accurately and quickly than any psychologist. Studies published in leading scientific and specialist journals, including Nature and JAMA Psychiatry, amply demonstrate its effectiveness.

In short, there is a possibility that, thanks to AI, we will live in a world where everything works perfectly and where we have more time to dedicate to ourselves and our loved ones. The usual pros and cons, then, inherent in every technology.

What I find strange in all this is that almost no one glimpses the possibility that AI could offer us for our inner evolution: to become aware of ourselves, of our role in the universe, and, perhaps, of the very meaning of existence.

As I said before, artificial intelligence is not an objective tool; it is not separate from our knowledge. Rather, it is the product of what we are and what we know or are supposed to know. AI shows us the reality that we do not want to admit, not the one to which, perhaps, we should tend, but the one that is.

When we ask AI to show us the figure of a doctor, it suggests a male doctor; similarly, when we ask it to depict a nurse, it offers a female figure. This is because, despite all the efforts that intellectuals and politicians have made for years to overcome gender stereotypes, in the collective imagination the doctor is male and the nurse is female, the mechanic is male, the seamstress female, and so on. This is how most human beings around the world think, and it shows in the data that, unknowingly, they feed to AI.

This very powerful tool is telling us, “This is who you are! Even though they tell you not to think that way, this is how you see it deep down; this is how most people on every continent think.”

So, AI could be a mirror that reflects what we are, a means to become aware of ourselves, see our mistakes, change, and evolve.

After what has been expressed here, how do we reread Alan Watts' opening sentence?

These are the topics I will discuss in the next series of articles, in conversation with a dear friend, the journalist and essayist Sergio Bellucci, author of the book AI - A Journey into the Heart of the Technology of the Future. Together we will try to answer the questions raised here, explaining in accessible terms how AI works, its astonishing potential as well as the problems inherent in this technology, and, above all, exploring the usefulness of this tool as a means of becoming aware of ourselves, just as we are for the universe. Macrocosm, microcosm.