The huge Covid-19 storm is undoubtedly still at its peak, having first appeared at the beginning of 2020. Its full economic, social and technological effects are not yet visible (although in the IT field we can already point to a few: the forced shift to remote learning and working, the huge expansion of streaming services, coronavirus-themed deepfakes, stronger phishing and malware campaigns, the rollout of 5G networks, etc.), but in any case they will be tremendous. It would therefore be risky to put forward a general theory of the geopolitics of the pandemic at this point. Without being experts on the matter at all, can we nevertheless begin to think about the possible impacts the virus has had on international politics?
We might start by saying that the pandemic is undoubtedly revealing the characteristics of a new world. First, we can see the weakness of our political environment (especially our health organizations and institutions), and then a considerable shift of power (all of it) towards big IT companies (Facebook, Amazon, Twitter, Google, Huawei, Microsoft), with states behind them (mainly China and the US) putting all their efforts into one goal: being the pioneers and “rulers” of the coming wave of Artificial Intelligence. Surprisingly, the competition was visible from the very beginning of all this chaos: during the first weeks of the epidemic, the people of China suffered the difficulties we all know by now, in the form of forced closures of factories, shops and services, and then appeared to overcome the epidemic thanks to strict authoritarian quarantine measures combined, above all, with an unprecedented use of AI; few were aware of how quickly such systems could be developed and deployed to “intervene” in the virus crisis.
Just a couple of years ago, both Russian President Vladimir Putin and Elon Musk warned that the country that leads the development of artificial intelligence will most probably dominate the world.
In any case, Russia is apparently a minor player, as are most of the remaining countries around the globe, with the exception of the United States and China; the latter is particularly interesting to address. Released in July 2017, the New Generation Artificial Intelligence Development Plan (AIDP) acts as a unified document outlining China’s AI policy objectives. It defines three key milestones, summarized as follows:
- By 2020, China wants to increase its competitiveness and optimise its AI development environment. In monetary terms, China aims to create an AI industry worth more than 21 billion USD. It also seeks to establish initial ethical norms, policies and regulations to accompany this early growth of AI.
- By 2025, China aims to have achieved a “major breakthrough” (as the document puts it) in AI theory and to become world-leading in many applications and fields. It also targets an increase in the industry’s worth to over 58 billion USD.
- By 2030, China seeks to become the world’s main centre for AI, with an industry valued at over 1 trillion RMB (roughly 150 billion USD), period.
On the other hand, while China has bet heavily on AI as a key component of its economic growth, the US remains a global AI leader, with an ecosystem of pioneering companies we already mentioned: Google, Facebook, Amazon, Microsoft and also Apple. The US government has also invested billions in AI infrastructure, and universities such as MIT and Caltech have bolstered research and teaching on the subject in an effort to maintain and extend the country’s advantage in AI capital.
Unfortunately, the rest of the world can only contemplate a future in which technological, economic and military supremacy becomes the domain of those few countries with the deepest pockets. The implications of having a small handful of countries controlling AI are remarkable. On one side, these technologically advanced countries could become jealous keepers of the technology, ensuring that significant resources are devoted to its development over the long term while shutting out potential competitors. It is also likely that the leading companies in these countries will gain an even more noteworthy lead in the global economic arena, granting them a substantial advantage. The militaries of these countries would also become primary beneficiaries of AI technologies, spurring a global race for superior weaponry and propelling the world towards a dangerous new type of war.
Electric sheep dominance
Until now we have only given some insights into the general drivers through which AI can become an important vector of geopolitical power; but more specifically, what could the implications or consequences be for the ordinary human being? While Artificial Superintelligence is unlikely for at least a couple of decades, the prospect known as the “singularity” is perhaps the biggest concern for many people.
Singularity is a term that comes up often in discussions about general AI, and honestly there is a lot of confusion and disagreement about what the singularity really is. But the key point most scientists and philosophers agree on is the notion that exponentially accelerating technological progress will create a form of AI that exceeds human intelligence (and escapes our control). The concern is that this superintelligence may then, deliberately or inadvertently, become a threat to humanity, or in the best-case scenario simply replace us. Another important aspect of the singularity is time and speed: AI systems would reach a point where they can self-improve in a recurring and accelerating fashion; this is called “recursive self-improvement” and would lead to an endless cycle of ever-smarter AI. It would be the digital equivalent of the genetic mutations organisms accumulate over the span of many generations, except that the AI could perform the process at a vastly faster rate.
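To make the idea of recursive self-improvement a bit more concrete, here is a minimal, purely illustrative sketch in Python. It is not a real AI system: the starting capability, the 10% improvement rate and the “human level” threshold are arbitrary assumptions chosen only to show how gains that feed back into themselves compound and accelerate, as described above.

```python
# Toy numeric sketch of recursive self-improvement (illustrative only).
# All quantities are hypothetical; the point is the shape of the curve,
# not any real measure of machine intelligence.

def recursive_self_improvement(initial_capability=1.0,
                               improvement_rate=0.10,
                               human_level=100.0,
                               max_cycles=1000):
    """Each cycle the system uses its current capability to improve itself,
    so the gain per cycle grows with capability and progress compounds."""
    capability = initial_capability
    elapsed_time = 0.0
    for cycle in range(1, max_cycles + 1):
        # Smarter systems are assumed to improve themselves faster:
        # each cycle takes less time as capability rises.
        cycle_duration = 1.0 / capability
        elapsed_time += cycle_duration
        # The improvement itself is proportional to current capability.
        capability *= (1.0 + improvement_rate)
        if capability >= human_level:
            return cycle, elapsed_time, capability
    return max_cycles, elapsed_time, capability

if __name__ == "__main__":
    cycles, time_units, final = recursive_self_improvement()
    print(f"Crossed the (arbitrary) threshold after {cycles} cycles "
          f"and {time_units:.1f} time units, at capability {final:.1f}.")
```

In this toy model every cycle both raises capability and shortens the next cycle, which is exactly the accelerating feedback loop the singularity argument rests on; whether real AI systems would ever behave this way is, of course, an open question.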
Recursive self-improvement is the step needed to reach the Singularity. And according to one of the brightest minds in AI, Ray Kurzweil, Google’s Director of Engineering and a well-known IT “oracle” with a solid record of accurate predictions, it will be reached by 2045. In his words:
2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.
While the implications of the AI revolution on global order have only begun to be contemplated, it is clear that disruptions of the future may not merely involve land, natural resources, and populations, but may determine the very existence of the human being. Rather than serving to increase efficiency and mesmerize us, an AI-dominated future could result in the greatest concentration of resources and power the world has ever known.
Given the complex dynamics surrounding the rise of AI, multilateral approaches are needed, especially for building strong ethical and safety standards and orienting new technologies towards social benefit. AI is rapidly and significantly transforming the world, and we must ensure that its development plays out for good, for our own sake.