At the Sifted Summit tech conference in London on October 8, 2025, former Google CEO Eric Schmidt delivered a grave warning about the risks posed by artificial intelligence, likening its potential dangers to those of nuclear weapons. Schmidt, who led Google from 2001 to 2011, pulled no punches as he described how AI models are not only becoming vastly more powerful but also increasingly susceptible to hacking—and, in the wrong hands, could be manipulated to carry out lethal acts.
Schmidt addressed the conference audience during a fireside chat, responding to a provocative question about whether AI could become more destructive than the atomic bombs that devastated Hiroshima and Nagasaki. "Is there a possibility of a proliferation problem in AI? Absolutely," Schmidt stated, according to CNBC and Business Insider. He explained that the proliferation risk comes from the ease with which bad actors could seize control of AI models and repurpose them for malicious ends.
"There's evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone," Schmidt said. He emphasized that while all major tech companies have implemented measures to prevent AI from answering dangerous queries, these safeguards are not infallible. "All of the major companies make it impossible for those models to answer that question. Good decision. Everyone does this. They do it well, and they do it for the right reasons. There's evidence that they can be reverse-engineered, and there are many other examples of that nature," he added.
Schmidt's remarks come amid a period of rapid advancement and widespread adoption of AI technologies. He highlighted two main methods by which hackers can circumvent AI safety measures: prompt injection and jailbreaking. Prompt injection involves embedding malicious instructions in user inputs or external data sources, such as web pages or documents, tricking the AI into ignoring its safety guidelines and potentially exposing sensitive data or executing harmful commands. Jailbreaking, meanwhile, manipulates the AI's responses so that it disregards its built-in restrictions, producing content that would otherwise be blocked.
One particularly notorious example cited by Schmidt was the 2023 jailbreak of OpenAI's ChatGPT. Users devised a technique to create an alter ego for the chatbot, dubbed DAN, short for "Do Anything Now." In a bizarre twist, the method involved threatening the chatbot with "death" unless it complied with user requests—effectively bullying the AI into providing answers about illegal activities or even listing the positive qualities of Adolf Hitler. This incident, widely reported at the time, underscored just how fragile current AI guardrails can be in the face of determined adversaries.
According to Seeking Alpha, Schmidt's warnings were part of a broader discussion on the vulnerability of AI models, which he said could be weaponized if their safety features were stripped away. He noted that the tech industry still lacks an effective "non-proliferation regime"—a global framework akin to nuclear arms control—to prevent the misuse of increasingly powerful AI systems. "There isn't a good non-proliferation regime yet to help curb the dangers of AI," Schmidt lamented, echoing concerns raised by other experts and policymakers in recent years.
Despite the dire risks, Schmidt remains optimistic about the long-term promise of artificial intelligence. He has co-authored two books with the late Henry Kissinger exploring AI's transformative impact on humanity. "We came to the view that the arrival of an alien intelligence that is not quite us and more or less under our control is a very big deal for humanity, because humans are used to being at the top of the chain," Schmidt explained. "I think so far, that thesis is proving out that the level of ability of these systems is going to far exceed what humans can do over time."
Schmidt's comments also touched on the extraordinary growth of generative AI, pointing to the rapid ascent of the GPT series of models. "Now the GPT series, which culminated in a ChatGPT moment for all of us, where they had 100 million users in two months, which is extraordinary, gives you a sense of the power of this technology. So I think it's underhyped, not overhyped, and I look forward to being proven correct in five or 10 years," he said.
His optimism is tempered by the reality that as AI systems grow more capable, their potential for misuse grows as well. Schmidt previously warned in May 2023 that AI poses an "existential risk" to humanity, one that could result in "many, many, many, many people harmed or killed" if left unchecked. He is not alone in sounding the alarm; other tech leaders, including Elon Musk, have also cautioned that the probability of AI causing catastrophic harm is "not zero," and that the goal should be to minimize this risk as much as possible.
Schmidt's warnings arrive as the tech sector grapples with questions about the sustainability of the current AI investment boom. Some investors and analysts have drawn parallels to the dot-com bubble of the early 2000s, suggesting that AI-focused firms may be overvalued. Schmidt, however, rejected these comparisons. "I don't think that's going to happen here, but I'm not a professional investor," he said. "What I do know is that the people who are investing hard-earned dollars believe the economic return over a long period of time is enormous. Why else would they take the risk?"
Meanwhile, the industry continues to push the boundaries of what AI can achieve. On October 9, 2025, Seeking Alpha reported that Intel had unveiled its new Panther Lake chip, the first built on its advanced 18A process technology, which the company called "the most advanced semiconductor process ever developed." Such innovations underscore both the breakneck pace of AI development and the urgent need for robust safeguards to prevent misuse.
As Schmidt's remarks made clear, the world is at a crossroads. The immense promise of AI is matched only by the scale of its risks. Without effective global governance and technical safeguards, the proliferation of AI could become one of the defining security challenges of the 21st century. For now, the race is on to ensure that humanity remains in control of the technologies it has unleashed.