AI firms warned to calculate threat of superintelligence or risk it escaping human control

Posted: 12th May 2025

Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test before they release all-powerful systems.

Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the Trinity test and had found a 90% probability that a highly advanced AI would pose an existential threat.
