From princes to billionaires: who wants to ban AI “superintelligence” and why
23 October 13:56
The global debate over the future of artificial intelligence is intensifying. More than 800 influential people – from Nobel laureates to billionaires and royalty – have signed a petition calling for a temporary halt to the creation of "superintelligence" capable of surpassing the human mind, as reported by Komersant Ukrainian, citing Forbes.
The petition was initiated by the Future of Life Institute, an organization focused on existential risks to humanity. Its authors argue that the development of super-powerful AI systems should be paused until their safety is scientifically proven and society consents to their deployment.
What is “superintelligence” and why is it feared
“Superintelligence” is defined as AI that not only helps humans but surpasses them in decision-making, planning, and strategic thinking.
Unlike current models, which perform specific tasks, a superintelligence could make decisions independently, generate its own goals, and optimize itself without human oversight.
The idea that artificial intelligence could outperform humans in cognitive abilities is no longer a fantasy: some leading experts believe this could happen by 2030.
Who signed the petition
Among the signatories:
- Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, pioneers of modern AI
- Richard Branson, billionaire founder of Virgin Group
- Steve Wozniak, co-founder of Apple
- Prince Harry and Meghan Markle
- Former high-ranking officials of the White House and American security agencies
- Political and media figures Steve Bannon and Glenn Beck
These people represent different political camps and industries, but they are united by an awareness of the risks of unchecked AI development.
Who did not sign:
Sam Altman (OpenAI), Mark Zuckerberg (Meta), and other executives who are actively investing in superintelligence and believe that it can be created this decade.
What the petition demands
The signatories call for:
- a global ban on the development of AI systems capable of surpassing humans
- a global moratorium that remains in effect until 100% safety of such technologies is proven
- the involvement of international oversight bodies
- rules that oblige companies and governments to undergo safety audits and public consultation
- international safety standards
- permission to develop only those technologies that have been experimentally proven safe
- no mass deployment of such systems until society approves it
This is not an attempt to "freeze progress forever," but a demand to prevent catastrophe should machines become uncontrollable.
What society thinks
According to polls:
- 80% of Americans support the introduction of strict rules for the development of AI, even if it slows down innovation.
- 64% of respondents believe that the development of superhuman AI should be either banned or allowed only after proving its complete safety.
Has this happened before?
The Future of Life Institute initiated a similar petition in 2023, calling for a pause in large-scale AI experiments. Elon Musk was among its signatories.
This time, Musk has not yet supported the petition, but he has repeatedly stated that AI could be “the greatest risk to humanity.”
Why it is important now
There is still no unified international control over how AI is used in the military, political, or economic spheres.
AI systems are evolving rapidly: from text-based models to autonomous agents that can already write code, manage processes, and make financial decisions.
The competition between corporations is becoming an arms race.
What it means for Ukraine
Ukraine is actively implementing AI in the military, economic, and information spheres.
If a global AI regulator is created, Ukraine should be among its participants, so as not to end up among the countries that consume rules rather than create them.
Regulation could affect investors, tech startups, and the defense sector.