BRUSSELS. The EU’s flagship AI Act falls far short of what is needed to avert existential risks posed by artificial intelligence, according to Stuart Russell, one of the world’s foremost AI researchers and a professor of computer science at UC Berkeley.
Speaking to Euractiv, Russell criticized both the content and enforcement mechanisms of the AI Act, warning that it lacks the strength to prevent potentially catastrophic outcomes. “Even if your system is incredibly dangerous… there’s nothing in the rules that say you can’t access the market,” he said.
Russell’s intervention comes amid growing concern that the EU is watering down its AI regulation just as the global regulatory mood shifts toward deregulation. Under pressure from major tech companies, the European Commission is reportedly considering delaying the implementation of the AI Act and weakening its provisions on general-purpose AI (GPAI) in a forthcoming Code of Practice expected this month.
Together with Nobel laureates Geoffrey Hinton and Daron Acemoglu, Russell co-signed a public letter urging Brussels to resist lobbying efforts and uphold mandatory third-party audits — a mechanism designed to prevent companies from self-certifying the safety of models like ChatGPT without independent scrutiny.
“To industry, it doesn’t matter what the document says,” Russell told Euractiv. “The companies want to have no regulation at all.”
In his view, fines — even those pegged to global revenues — are inadequate to deal with the scale of potential harm if advanced AI systems become uncontrollable. “Once you have systems that can take control of our civilization and planet, then fining a one-digit percentage is ridiculous,” he warned.
Russell is part of a growing group of AI pioneers who argue that artificial intelligence may pose an existential threat to humanity — a position critics dismiss as “doomerism.” But Russell pushes back, citing widespread concern among top AI researchers and executives. “If you look at the top five CEOs or top five AI researchers in the world — with the exception of Yann LeCun — every single one says: No, this is real,” he said.
Even European Commission President Ursula von der Leyen warned in 2023 that AI could “approach human reasoning” within a year, a remark she made while acknowledging the technology’s potential to threaten human survival.
Yet despite such statements, no meaningful regulatory steps have been taken, Russell said. He compared the current approach to “waiting for a Chernobyl-sized disaster” before acting. “Real regulation,” he added, “would mean requiring safety guarantees comparable to those for nuclear plants — but with even stricter thresholds.”
Such guarantees remain elusive, not least because the inner workings of today’s models are poorly understood. “Companies haven’t the faintest idea how their systems work,” Russell noted.
For now, his hope rests on one modest demand: enforceable external evaluations in the EU’s upcoming Code of Practice. “It wouldn’t be enough,” he said. “But it would help considerably.”