AI firms warned to calculate threat of superintelligence or risk it escaping human control

Posted: 12th May 2025

Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test before they release all-powerful systems.

Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the Trinity test and had found a 90% probability that a highly advanced AI would pose an existential threat.

