Technology
26 September 2025

AI Doomers Warn Of Superintelligence As Altman Predicts 2030

With AI critics sounding alarms and OpenAI’s Sam Altman forecasting superintelligent systems within five years, global leaders and regulators face mounting pressure to balance innovation with existential risk.

The debate over artificial intelligence’s future has reached a fever pitch, with concerns about the rise of superintelligent AI sparking urgent discussions among technology leaders, policymakers, and the public. On September 25, 2025, the so-called “AI Doomers”—a group of prominent figures warning of a potential superintelligence apocalypse—amplified their calls for stricter regulations and international treaties designed to keep AI development aligned with human interests. Their warnings come as the pace of AI advancement accelerates, raising fears that humanity could soon be outpaced by its own creations.

At the center of this debate stands Sam Altman, CEO of OpenAI, whose recent appearances in Berlin and Washington, D.C., have placed him in the spotlight. According to POLITICO Magazine, Altman attended a meeting of the White House Task Force on Artificial Intelligence Education on September 4, 2025, before traveling to Berlin to receive the prestigious Axel Springer Award. While Altman is famously cautious about making predictions, he didn’t shy away from addressing the looming question of when superintelligent AI—systems smarter than humans in all respects—might actually arrive.

“I would certainly say by the end of this decade, so, by 2030, if we don’t have models that are extraordinarily capable and do things that we ourselves cannot do, I’d be very surprised,” Altman told the Axel Springer Global Reporters Network. His statement underscored the sense of inevitability and urgency that has come to define the current era of AI development. According to the AI Doomers, as reported by Agents of Concern in the Age of AI, the rapid progress of artificial intelligence could soon propel us into uncharted territory, where machines possess abilities far beyond human comprehension.

Altman’s perspective is nuanced. While he acknowledged that GPT-5, OpenAI’s latest model, is already “smarter than me at least, and I think a lot of other people too,” he also pointed out its limitations. “It’s also not able to do a lot of things that humans could do easily. And I think that will be the course of things for a while,” he explained. In his view, AI systems will continue to excel at some tasks while struggling with others, creating a future where humans and AI collaborate, each bringing unique strengths to the table.

But the trajectory of AI’s capability, Altman warned, remains “extremely steep.” In just a few years since the launch of ChatGPT, the models have become dramatically more powerful. “I see no sign of that slowing down. I think in another couple of years, it will become very plausible for AI to make, for example, scientific discoveries that humans cannot make on their own. To me, that’ll start to feel like something we could properly call superintelligence,” he said.

This rapid progress has fueled existential concerns. AI Doomers argue that as superintelligent AI systems inch closer, the risk of losing control grows. According to Agents of Concern in the Age of AI, Silicon Valley leaders are pushing for international regulations and treaties to ensure AI remains aligned with human values. Without such safeguards, they warn, the consequences could be catastrophic—an “apocalypse” where humanity’s fate is no longer in its own hands.

Public reactions to these warnings have been mixed. Some see the rise of superintelligent AI as a threat to jobs, privacy, and even civilization itself. Others, like Altman, adopt a more measured optimism. When pressed about the possibility that AI might treat humans like ants—an analogy famously used by AI researcher Eliezer Yudkowsky—Altman offered a different vision. “My co-founder, Ilya Sutskever, once said that he hoped that the way that an AGI would treat humanity or all AGIs would treat humanity is like a loving parent. And given the way you asked that question, it came to mind. I think it’s a particularly beautiful framing,” he said. Still, he cautioned that even without intentionality, AI could have unintended side effects, making alignment with human values crucial.

The economic implications of AI’s rise are no less significant. Altman estimated that “30, 40 percent of the tasks that happen in the economy today get done by AI in the not very distant future.” He advised that the most important skill for the next generation is “learning how to learn, of learning to adapt, learning to be resilient to a lot of change.” In his view, just as past technological revolutions have upended the job market, AI will reshape the world of work, eliminating some roles while creating new ones. “I’m so confident that people will still be the center of the story for each other,” he added, expressing faith in human creativity and adaptability.

Yet the calls for regulation grow louder. According to Agents of Concern in the Age of AI, key figures have proposed international treaties to address AI safety, reflecting mounting anxiety that national policies alone may be insufficient. In Europe, the debate over the AI Act has intensified. Altman, when asked about the proposed regulations, struck a diplomatic tone. “Obviously, that is a question for the European people and the European policymakers,” he said, noting that German companies expressed both concerns about overregulation and hope that sensible rules could protect people and foster innovation.

Altman’s visit to Germany was marked by discussions about building AI infrastructure tailored for the German market, despite challenges like high energy costs. “The importance of delivering AI in Germany to German businesses and German consumers is very important. Germany is our biggest market in Europe. It’s our fifth biggest market in the entire world. Virtually all young German people use ChatGPT. So AI is here and people are getting value from it,” he noted. When asked about Germany’s nuclear phase-out, Altman voiced his support for nuclear energy as a promising approach to meeting AI’s growing energy needs, though he emphasized that such decisions ultimately rest with local stakeholders.

As for the political climate, Altman observed a shift in the U.S. tech industry’s relationship with government, particularly with President Donald Trump. “The ability to build infrastructure in the United States, which has been quite difficult and quite important to companies like ours, President Trump has done an amazing job of supporting. And a more general pro-business climate and pro-tech climate has been also a welcome change,” he said. However, he insisted that the tech industry should work with whoever occupies the White House, regardless of party.

Despite the swirl of speculation, Altman dismissed the idea that artificial intelligence could or should replace human leaders. “I don’t think people want that anytime soon. What I expect, though, is that presidents and leaders around the world will use AI more and more to help them with complex decisions. But I think we all still want a human signing off on that at some point,” he remarked.

In the end, the debate over superintelligent AI is far from settled. As the technology marches forward, the world must grapple with profound questions about control, safety, and the very nature of progress. For now, leaders like Sam Altman urge a careful balance—embracing the promise of AI while safeguarding against its perils. The stakes, after all, could not be higher.