AI firms warned to calculate threat of superintelligence or risk it escaping human control

Posted: 12th May 2025

Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test before they release all-powerful systems.

Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton, who assessed the odds of an atomic explosion igniting the atmosphere ahead of the Trinity test, and had found a 90% probability that a highly advanced AI would pose an existential threat.
