AI firms warned to calculate threat of superintelligence or risk it escaping human control

Posted: 12th May 2025

Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test before they release all-powerful systems.

Max Tegmark, an MIT professor and a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton, who weighed the odds that the Trinity test would ignite the atmosphere, and had found a 90% probability that a highly advanced AI would pose an existential threat.
