AI firms warned to calculate threat of superintelligence or risk it escaping human control

Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test before they release all-powerful systems.
Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the 1945 Trinity test, and had found a 90% probability that a highly advanced AI would pose an existential threat.