Can AI “Extinguish” Humanity?: The Reason Why No One Stops Its Development
Artificial intelligence (AI) will be capable of the best and the worst. The same technology that could become the greatest ally in the fight against poverty and climate change could also jeopardize humanity’s survival at some indeterminate point in the future, in the style of an eighties science fiction movie.
This is the warning of approximately 350 of the world’s leading AI researchers and executives, who yesterday signed a statement making clear the need to work so that this type of technology does not end up becoming a potential danger, even to the world’s survival.
“Mitigating the risk of extinction from AI should be a global priority, along with other societal-scale risks, such as pandemics and nuclear war,” the statement reads.
Among the signatories to the declaration are some of the big names leading this new golden age of AI, which began a few months ago with the thunderous arrival of ChatGPT, the talking machine capable of answering practically any question.
That is the case of Geoffrey Hinton, who, in early May, left his position as a vice president of engineering at Google in order to speak openly about the dangers of AI without conflicting with the company’s interests. He is considered one of the fathers of artificial intelligence and was recognized in 2022 with the Princess of Asturias Award for Scientific and Technical Research.
The name of Sam Altman, chief executive of OpenAI, the company behind ChatGPT and DALL-E (a tool capable of generating images from a handful of words typed by the user), also appears prominently at the top of the list of signatories. In recent weeks, Altman has repeatedly warned of the need for public and private actors to reach agreements to address the potential dangers of AI, which has great potential to destroy jobs and generate misinformation.
He has argued that artificial intelligence should be treated “with the same care” as atomic weaponry and has advocated the creation of an international body, similar to the UN’s IAEA (International Atomic Energy Agency), to oversee its control. Despite the risks, he considers it good that ordinary users have intelligent systems within reach, since this allows them to get to know the technology and detect possible risks.
“The declaration of the signatories is a movement of scientists, mostly from US institutions,” says Inma Martinez, chair of the Expert Committee of the Global Partnership on AI, an agency of the OECD and the G7. As for the fact that it is precisely the entrepreneurs behind companies developing AI, such as Altman and Hassabis, who are warning of the need to control the systems they build, the expert points to the recent meetings their top executives have held with the governments of the United States and Europe.
About three weeks ago, the White House summoned the executives of Google, Microsoft, and OpenAI and read them the riot act: either they would fix AI so that it would not cause problems, or they would face harsh regulation. From then on, the discourse changed, and the companies began to ask for oversight.
Regarding the possibility that, as experts warn, AI could become a danger to human survival, Ulises Cortes, professor of Artificial Intelligence at the Polytechnic University of Catalonia, points out in conversation with this newspaper that “AI is a scientific discipline” and, as such, humans can limit its functionalities.
In other words, at least for now, the developer can control the machine so that it does not harm humanity: “We do indeed have increasingly better algorithms. Machines can make decisions on their own, but only specific ones. They are not going to decide to kill everyone on their own. And if such a machine comes along, we’ll have to talk about the idiot who created it.”
Despite the potential risk of artificial intelligence, the big Western tech companies are convinced that the advance of the technology should not be stopped. Instead, regulations should be created to govern its development and set security standards. And not only because of the potential economic gains that intelligent systems can generate, but also because of the fear that antagonistic states, China among them, will use the technology to harm the interests of the United States and the European Union.
For the past few months, the US has been trying to limit China’s access to US technology in order to, among other things, slow its advance in artificial intelligence. So far, this appears to have had little impact on the Chinese industry: since 2020, organizations in the Asian giant have launched 79 models similar to ChatGPT, and the country is determined to redouble its efforts to develop its own artificial intelligence and challenge the United States for hegemony, according to a recent state report cited by Reuters.
There are countries that, in order to create weapons or compete with their rivals, use AI in ways that harm people.