Today there is worldwide concern over ecological problems and climate change in particular. A systems view of the world shows us that pollution and climate change mutually interact, thereby spawning further crises: the loss of biodiversity, the huge wealth gap between rich and poor that forces the dispossessed to migrate towards wealthier nations, and the destabilization of democracy. There is much to be concerned about, and the rage and anxiety felt by those most affected are exacerbated by the indifference of the majority of influential governments.
Precisely because of the urgency of these ecological issues, we have a tendency to underestimate the threat posed to our present globalized society by the meteoric rise of artificial intelligence (AI). Nevertheless, within the field of AI itself, many powerful, critical voices are alerting us to the possible dangers of AI. One of the best known is Elon Musk, himself a leading exponent of AI, who has been warning us for some time that the domination of AI could lead to a catastrophic disruption of our civilisation [1].
Similarly, the Swedish philosopher Nick Bostrom (Bostrom, 2017) has pointed out the risks involved in creating super-intelligent computers or robots, which can blindly carry out their set tasks without regard to the goals and ethics of mankind. The cosmologist Stephen Hawking has also issued strong words of warning. Not all the players in AI share such a bleak outlook, but there is no doubt that we are relying more and more on artificial intelligence in all aspects of our private and social life.
In this essay, I would like to focus on one particular aspect of this problem: namely, that while AI is enabling us to solve more and more of our technical problems, our awareness of the spiritual dimension, and of consciousness in particular, is shrinking—perhaps even withering away. This is not a new idea: Yuval Harari devotes a whole chapter of his book Homo Deus (2015) to the problem of the growing disconnect between intelligence and consciousness. Here we will look at how this issue relates to some of the concepts discussed in earlier essays of this book.
Part One: Some Basic Features of AI
Before discussing artificial intelligence, let me start with a caveat. Intelligence, in this context, means the ability to solve a problem or carry out an assigned task. It does not mean understanding the meaning of the task. In other words, a computer may beat the world chess champion Kasparov, but it does not know what a chess game is. It simply, and blindly, executes programmed calculations of probability. A computer may identify a cat in a complex picture containing many other items, but it does not know what a cat is. A two-year-old immediately knows the difference between a cat and a dog, whereas the best of our computers can only inform us that there is a 95% probability that this is a cat. And before they can do even that, their archives have to be stuffed with pictures of thousands of cats and thousands of dogs. Which means that, in general, the most advanced computers are still far behind the human level of intelligence (what AI vocabulary calls AGI, artificial general intelligence). The fact that they cannot even match a two-year-old's mental capacity serves to remind us that a child's brain is the result of ten million years of biological evolution. The ambition of some DeepMind scientists to create something comparable to a human mind from scratch, from a tabula rasa, by feeding it supervised data piece by piece, thus appears rather naïve.
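The hedged verdict described above ("a 95% probability that this is a cat") is typical of how classifiers report their results. As a toy illustration (the labels and raw scores below are invented, not produced by any real network), a trained model typically ends in a softmax step that converts raw scores into probabilities, and the machine reports only the most likely label together with its confidence:

```python
import math

def softmax(scores):
    """Turn raw classifier scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores such a network might emit for one image.
labels = ["cat", "dog", "rabbit"]
raw_scores = [4.2, 1.1, 0.3]

probs = softmax(raw_scores)
best = max(range(len(labels)), key=lambda i: probs[i])
print(f"{labels[best]}: {probs[best]:.0%} probability")
```

Note that nothing in this procedure involves knowing what a cat is: the machine only ranks numbers, which is precisely the caveat above.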
That said, it remains true that AI's performance in selected, specific tasks is already impressive. With the Internet we can now translate an entire book from any language into any other in a fraction of a second. And, as well as ransacking Wikipedia, many of us have perhaps been engaging Siri or Alexa in private conversation, getting the impression that they understand us and are responding intelligently.
One thing is certain: more and more work will be taken away from humans and given to robots and machines. At present, this mostly concerns routine repetitive tasks. For example, according to AI experts it is entirely possible that three thousand workers on a car factory production line could be replaced by thirty robots working twenty-four hours a day, without meal or coffee breaks and unhampered by pregnancy. But nurses, traffic wardens, pilots, soldiers, truck and taxi-drivers could soon be replaced too, not to mention white-collar workers, lawyers, travel agents, or teachers. Already, more and more medical diagnoses are being carried out by computers containing millions and millions of stored data from any given branch of medicine. All doctors will need to do is show the computer a picture or X-ray of what they fear may be a tumour, and they will receive a very reliable answer within seconds. AI techniques are already being used to solve difficult problems of electric and electronic circuitry, handle traffic jams, assist navigation, plan "smart cities" and build sporting mega-facilities. They are also helping to deal with many engineering problems that are too complex for humans: designing a critical bridge or dam can be enormously simplified, possibly with the help of 3D printing.
A few more facts about AI. At the Massachusetts Institute of Technology, perhaps the most important technological university in the world, there are over a thousand researchers working on AI. There are also a few thousand at Stanford and in Silicon Valley, as well as at Google, which, like Toshiba, is a kaleidoscope of diversified AI companies.
You can read all about this in Martin Ford's book, Architects of Intelligence (2018) [2], in which he interviews the leading players in the field of AI and discusses their ambitions. There you will encounter the super-hero Ray Kurzweil, director of engineering at Google. He has received no fewer than 21 honorary degrees and several medals from three different US presidents, and owns more than 20 patents in AI. You will also meet, among many others, Rodney Brooks, one of the world's foremost roboticists, as well as prominent women roboticists: Cynthia Breazeal, director of the Personal Robots group at the MIT Media Lab; Daniela Rus, director of the world's largest robotics research organization, also at MIT; Fei-Fei Li, chief scientist at Google Cloud, who is interested in the democratization of AI and in attracting women into AI research; and Rana el Kaliouby, co-founder and CEO of Affectiva, which specializes in AI systems that sense and understand human emotions.
Ford asks his interviewees four basic questions: whether and to what extent AI will be disruptive for our society, especially as regards human job losses; what kind of dangers AI may bring to our world; where the AI arms race between China and the USA could lead; and whether and when computers will ever achieve AGI, artificial general intelligence. (Wondering where Europe is in all this? Simply absent.)
This is not the place to review their detailed and very diverse answers, but all researchers agree that AI is still in its infancy and that nobody can predict with certainty what will happen in the next twenty or thirty years. However, Mark Zuckerberg, the founder of Facebook, has expressed the opinion that smartphones, and even his own Facebook, will soon become obsolete, because our brains will interface directly with the Internet by means of microchips or headphones [3]. One of the implications of this, if it actually happens, is that people will communicate with each other telepathically, simply by their thoughts. It will be another world, the world of transhumanism or post-humanism. Science fiction? I don't know. But I do know that it would be a great mistake to take the words of visionary entrepreneurs like Mark Zuckerberg and Elon Musk lightly.
Part Two: The Widening Disconnect
Whatever the future holds, one thing is already clear. Machines will increasingly be taking over intellective functions from humans. What is more, they are likely to acquire the built-in capacity to improve their own intelligence, thereby becoming—to use Nick Bostrom's term—super-intelligent. So, what impact will all this have on us humans over the next few decades? To be able to imagine answers, we need first of all to accept the idea that AI will increasingly permeate every aspect of our everyday life, just as electricity did a century ago.
I believe there are two spheres where the rise of AI will lead to a loss of capabilities in humans. One is that of human intelligence, our capacity to tackle and solve problems. The other is that of the human spirit: our feelings, our consciousness. It is the latter which interests me more here, but let us first glance briefly at the loss of simple intellective functions. In AI jargon this is known as "deskilling", namely our increasing inability to carry out quite elementary tasks, whether manually or mentally. To give a personal example: before the advent of smartphones, I could remember by heart at least twenty telephone numbers. Now I can't even remember my partner's. Why should I? It's in my iPhone. And what about our sense of orientation? Most of us nowadays, even taxi-drivers, are so reliant on GPS that we're no longer capable of working out for ourselves how to get onto the motorway. Readers will be able to add many more examples of their own, and the AI literature contains numerous cases of the decline of manual dexterity in humans now that machines are much faster and can do things better.
Let us come now to the impact of AI on our spiritual and emotional sphere. I believe this is a particularly important issue, one which thinkers have largely overlooked. (There are, of course, a few exceptions: we have already mentioned the chapter Harari devotes to this problem in Homo Deus.) When approaching it, we need to remember that AI is not a separate entity but an intrinsic part of the modern digitalized world, with its mass media and social media.
One major way it affects our soul and spirit is by dulling our sensitivity to the suffering of others. On TV, in films, in the mass media, we watch hundreds of murders a week, often depicted in shocking, gory detail; realistic autopsies exhibit internal organs as if they were simply merchandise. We seem to be losing not only our respect for others, but also our sense of the sacredness of the body and of death. Is it surprising, then, that when we hear of a hundred migrants locked in a ship's hold without food or water, fewer and fewer of us feel any compassion? Pope Francis sees this indifference, this loss of empathy for the situation of others, as the great malady of our time. In the media, violence has become our daily bread: even children's cartoons are full of bombs, laser guns and killer drones. They are toy weapons, of course, yet they imprint on children's minds the idea that shooting and slaughtering others is a normal, routine human activity.
As for young people in general, look at what smartphones and the Internet are doing to them. Many of us adults have difficulty interacting with teenagers: they are always on the phone and don't want to be disturbed. Some psychologists have suggested that many young phone-addicts suffer from a kind of autism. Strange: to be connected with the whole world by the Internet and yet to be in a state of autism! Not that adults are immune, of course. We too are often slaves of our iPhones. You may have seen that cartoon of a couple naked in bed together, lying with their backs to each other and their eyes glued to their smartphones. Are smartphones disconnecting us?
Nor is this the worst of the dangers besetting us. Take, for instance, subliminal advertising in the mass media. It brainwashes us into desiring food, drink and clothes that are unnecessary and often harmful to us. Of course, this is what a consumer society does. With regard to this issue, the psychologists Allen Kanner and Mary Gomes (1995) have gone so far as to claim that "in order to transform consumerism into a way of life, modern capitalism is implementing the biggest psychological project ever carried out by the human race" [4].
What this amounts to is constructing a false self by distorting real human needs. Manipulation of the self and loss of awareness go hand in hand. One form of manipulation that we are only too familiar with nowadays is fake news, which is making it more and more difficult to distinguish truth from lies, especially during elections.
Another major cause for concern is the fact that AI, the digital world and the mass media are all in private hands, in particular those of the four "GAFA" giants, meaning Google-Amazon-Facebook-Apple. (In China, they have the BATX...). It is quite frightening to see how billions of dollars dance from one AI giant to another: WhatsApp sold to Facebook for 22 billion dollars; Braintree sold to PayPal for 800 million, after which its founder poured 100 million into the brain-interface startup Kernel... All this power in the hands of a few dozen billionaires!
This brings us to a crucial issue that has received much press and television coverage in recent months, namely the threat globalized social media pose to our privacy. It is now clear that social media like Facebook, Instagram and others are continually monitoring all our actions, storing everything we say and write about our tastes in food, cosmetics, clothes, travel, sex, dreams and nightmares. When, for example, a young man or woman turns eighteen, their parents might well ask Big Computer (and some actually have) what their son or daughter should do in life. Should they become engineers or, as they naïvely hope, artists? What parents can compete with a super-informed computer whose statistical analyses are based on eighteen years of close observation? And why shouldn't parents, incidentally, ask the computer who might be an ideal partner for their child?
So far, our scattered observations have been responses to specific AI-generated phenomena. But what about the underlying philosophical assumptions on which AI is built? Is AI generating a philosophy of its own, a way of understanding our mind and the world in general? The buzzword in this field is "transhumanism", which for some exponents, rather than a philosophy, is a religion or even a kind of fanaticism. A typical transhumanist thesis is that human beings, thanks to AI, will be able to transform themselves into different beings with vastly augmented capabilities. This is based on the premise that the human species in its current state does not represent the end of our evolution. Just as the Neanderthals were wiped out by Homo sapiens, so present-day humanity will be replaced by cyborg trans-humans, who have merged with machines while remaining human, and who transcend death and ageing [5]. The transhumanist movement was started by one of the first "professors of futurology", who called himself FM-2030 [6]. It took root around 1980 at a gathering of committed intellectuals (including philosophers like Nick Bostrom and Max More) in California, and is now well established, especially in the US.
But in addition to this peculiar brand of futurology, a subtler, underground philosophy is emerging from the current state of AI. I think this new philosophy amounts to a kind of reductionism, epitomized in that AI keyword: the algorithm. An algorithm is an operational procedure for achieving a given result—a logical, causal series of linked events that bring about a solution. In recent years, AI experts have begun to argue that all biochemistry, and even the human brain itself, runs on algorithms. Harari, for example, has put forward the hypothesis that fear, which we generally consider to be a feeling, is actually an algorithm. A monkey sees a snake and within milliseconds the brain's algorithm calculates that there is a certain likelihood the situation could prove life-threatening. This algorithm triggers a shiver of fear—a mechanical brain operation. I imagine most of us would object: no, fear, like love, is a feeling, a spontaneous emotion, and the brain's machinery has nothing to do with it. But we have no way of demonstrating this, especially when AI people talk in terms of mechanisms that take only milliseconds. How can we prove that an event lasting a mere millisecond doesn't precede—and therefore cause—our feeling? On the other hand, the fact that we cannot prove it at this point in time doesn't necessarily mean that our intuition is wrong.
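Harari's algorithmic picture of fear can be caricatured in a few lines of code. The sketch below is purely illustrative: the stimuli, threat scores and threshold are all invented, and no claim is made about how real brains work. Its point is simply that such a procedure can trigger a "fear response" without anything being felt anywhere:

```python
# A deliberately crude caricature of the "fear is an algorithm" thesis.
# All values are invented for illustration; no claim about real brains.

THREAT_PRIORS = {"snake": 0.9, "leaf": 0.05, "shadow": 0.4}
FEAR_THRESHOLD = 0.5

def fear_algorithm(stimulus):
    """Return True (trigger the fear response) if estimated threat is high."""
    threat = THREAT_PRIORS.get(stimulus, 0.1)  # unknown things: mostly harmless
    return threat > FEAR_THRESHOLD

for stimulus in ["snake", "leaf", "shadow"]:
    reaction = "freeze and shiver" if fear_algorithm(stimulus) else "ignore"
    print(f"{stimulus}: {reaction}")
```

Whatever one makes of the hypothesis, the sketch makes the worry concrete: the mechanism runs to completion, and the felt quality of fear appears nowhere in it.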
Let me conclude by reiterating my main contention: the more machine intelligence takes over our lives, the more traditional human skills, and with them the human spirit, will be diminished. How far the disconnect is likely to widen over the next few years, and how much we need to worry about it, is a matter of opinion. But there is no denying that it already exists, and that we must do something about it. What we should do is not easy to imagine. Some thinkers, like Laurent Alexandre (2018), are placing their hopes in education: the creation of a new kind of school, based on (guess what!) transhumanism. Certainly, we cannot start a crusade against AI; the flow of time is irreversible. But it is equally certain that we must weigh carefully what we are giving up in exchange. However we go about it, this will be a hard nut to crack, but a first, crucial step is to become aware of the problem.
References
Laurent Alexandre, La guerra delle intelligenze, EDT, 2018
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2017
Martin Ford, Architects of Intelligence, Packt, 2018
Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow, London, Harvill Secker, 2015
Acknowledgment
The author would like to express his gratitude to Prof. William Dodd, who edited this essay and who, through his criticism and questioning, contributed considerably to its final form.
[1] You can find some of Elon Musk's ideas, plus his controversy with Mark Zuckerberg, in a couple of YouTube videos: for example, the conversation with Michio Kaku of May 16, 2018, and the CNN video of July 25, 2017.
[2] Martin Ford's TED talk on AI and economics has been viewed by over two million people and is well worth watching.
[3] See Laurent Alexandre, La guerra delle intelligenze, EDT, 2018.
[4] Quoted in Leonard Boff and Mark Hathaway, The Tao of Liberation, 2014.
[5] See Mark O'Connell's book To Be a Machine, Granta, 2017.
[6] See Wikipedia.