The creator of ChatGPT, Sam Altman, is sounding the alarm: the threat of AI extinction is as severe as a nuclear war

Imagine a world where the same fear factor we associate with the prospect of nuclear war is linked to the imminent threat of artificial intelligence. ChatGPT creator Sam Altman, along with several other notable names in the tech industry, is sounding the alarm, claiming that the potential risk of extinction from AI must be treated as seriously as other catastrophic global events.

Is the rise of artificial intelligence a global extinction threat?

Imagine waking up another day to a warning about how artificial intelligence could wipe out our civilization. This time it comes from top entrepreneurs, experts and scientists, led by Sam Altman, the creator of ChatGPT. They have issued a brief statement claiming that the potential “extinction risk” posed by AI should be regarded with the same gravity as other global catastrophic events such as nuclear war.

The rather brief statement bears the signatures of notable figures in the AI world, but it's surprisingly succinct. It's a handful of words that encapsulate an expression of concern rather than a specific plan of action to avert the supposed annihilation. “Mitigating the risk of extinction from AI should be a global priority, on par with other societal-scale risks such as pandemics and nuclear war”, reads the statement published on the website of the Center for AI Safety, a non-profit organization.

Opening up the debate: the hidden message behind brevity

There's a reason for such a brief statement to express such a big warning. According to the organization, many parties, including experts, legislators, journalists and the general public, are now discussing the risks of artificial intelligence. However, they find it difficult to communicate the most serious and immediate dangers brought about by the technology in a simple, hard-hitting way. With this message, they aim to “open the discussion” and identify the experts who are taking this issue seriously.

It should be noted that Sam Altman is not the only signatory of this message. Other names such as Demis Hassabis, CEO of DeepMind, Dario Amodei, CEO of Anthropic, Emad Mostaque, CEO of Stability AI, and Kevin Scott, CTO of Microsoft, are also on the list. Renowned researchers such as Geoffrey Hinton, Yoshua Bengio and Lex Fridman are also among the signatories.

Do ChatGPT and other AIs represent an extinction risk for civilization?

The press release, while expressing concern about AI, invites you to read between the lines. What the signatories say doesn't necessarily mean they believe Bard or ChatGPT are capable of becoming Skynet and triggering a machine rebellion.

However, they do argue that the impact of artificial intelligence on the daily lives of millions of people should not be left to chance. Consequently, they consider it necessary to study and put in place the safety measures needed to ensure that, in the future, AI cannot be put to destructive uses whose severity is on the same level as a nuclear war or a pandemic. A point that goes hand in hand with the debate on the regulation of this technology.

Seeking balance: The future of AI

If AI's risks really are comparable to those of other catastrophic events, it will be interesting to see how its potential regulation is approached. This depends on much more than just the goodwill of experts and scientists. The involvement of legislators will also be crucial, not to mention the companies now investing billions of dollars.

But let's not forget that the dangers of artificial intelligence are only part of a much broader story, and that the technology also has great potential benefits for humanity, especially in education and healthcare. That's why, as Bill Gates recently said, we need to strike a balance. “We should try to balance fears about AI's drawbacks, which are understandable and valid, with its ability to improve people's lives. To make the most of this remarkable new technology, we need to protect against the risks and distribute the benefits to as many people as possible”, said the Microsoft co-founder.
