AI firms warned to calculate threat of superintelligence or risk it escaping human control

Posted: 12th May 2025

Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test before they release all-powerful systems.

Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton, who estimated the odds of the Trinity test igniting the atmosphere, and had found a 90% probability that a highly advanced AI would pose an existential threat.

