On September 26, 2025, OpenAI CEO Sam Altman made headlines with a bold prediction: artificial intelligence will surpass human intelligence by the end of this decade. In remarks reported by Business Insider, Altman laid out his vision for the future of AI, asserting that the world is on the cusp of a technological revolution that could outpace even the most ambitious expectations. "We are on track to see AI exceed human intelligence within the next five years," he stated, suggesting that artificial general intelligence (AGI), systems capable of performing tasks at or beyond the human level, could emerge as soon as 2029.
Altman’s confidence in AI’s trajectory is rooted in the rapid progress of models like GPT-5. In his words, “In many ways, GPT-5 is already smarter than me, at least, I think a lot of other people too. GPT-5 is capable of doing incredible things that many people find very impressive. But it's also not able to do a lot of things that humans could do easily.” This admission, reported by Business Insider, underscores both the promise and the current limitations of today’s most advanced AI systems.
Just one day before GPT-5’s release in early August 2025, Altman posted an image of the Death Star on social media—a cheeky nod to the model’s potential impact. In a July interview, he didn’t mince words about the significance of OpenAI’s latest creation: “We have discovered, invented, whatever you want to call it, something extraordinary that is going to reshape the course of human history.” He even compared OpenAI’s research to the Manhattan Project, expressing a sense of awe and humility: “I felt useless compared with OpenAI’s newest invention.”
For years, Altman and other leading technologists, including Anthropic CEO Dario Amodei and professors Yoshua Bengio and Stuart Russell, have dreamed of, and sometimes feared, the advent of superintelligent systems. The pursuit of AGI has become a kind of holy grail for the AI community, with the stakes extending far beyond the tech industry. According to Foreign Affairs, Altman told President Donald Trump that AGI would be achieved within his term, urging Washington to prepare for the profound geopolitical implications. The message landed: over the past two years, U.S. lawmakers from both parties have ramped up discussions about AGI, exploring policies to harness its potential or limit its risks.
Political attention on AGI has reached a fever pitch. In September 2024, Senator Richard Blumenthal declared in a hearing on AI oversight that AGI is "here and now—one to three years has been the latest prediction." Senator Mike Rounds introduced a bill requiring the Pentagon to establish an AGI steering committee. And in June 2025, Representative Jill Tokuda of Hawaii described artificial superintelligence as "one of the largest existential threats that we face." The bipartisan U.S.-China Economic and Security Review Commission's 2024 report called for a Manhattan Project–level effort to ensure the United States achieved AGI first. The Biden administration issued executive orders regulating AI partly out of concern that AGI was on the horizon, while President Trump's AI Action Plan, released in July 2025, emphasizes frontier AI and technological dominance.
But is superintelligence really just around the corner? Not everyone is convinced. As Foreign Affairs notes, some prominent computer scientists, such as Andrew Ng, question whether AGI will ever be created. Many experts point to the persistent shortcomings of current AI models: shallow reasoning, brittle generalization, lack of long-term memory, and hallucinations. Even GPT-5, despite its hype, is described in that analysis as "an advancement rather than a transformative breakthrough." Altman himself tempered expectations in August, stating that AGI is "not a useful concept." The reality, it seems, is that AI progress is likely to be iterative, with each step building on the last, rather than a sudden leap to superintelligence.
Altman’s acknowledgment of AI’s limitations is paired with a keen awareness of its potential impact on the workforce. On September 25, 2025, he said that AI is evolving rapidly and could come to perform as much as 40% of the work done today. As reported by Business Insider, Altman discussed not only the technological advances but also the importance of regulation and safety in AI development. "If the technology is not governed properly, it will lead to existential risks," he warned, echoing the concerns of other industry leaders and calling for global regulatory frameworks to ensure AI’s safe development.
As AI adoption accelerates worldwide, governments and companies are investing heavily in generative AI, robotics, and autonomous systems. The United States and China are locked in a high-stakes race for AI leadership. China, for its part, is moving quickly on AI integration and robotics: its "AI Plus Initiative" is designed to make AI a core part of the country's infrastructure within five years, targeting widespread industry-specific AI adoption by 2027 and full infrastructure integration by 2030, as reported by Foreign Affairs.
Yet, for all the excitement, the practical challenges of AI adoption remain formidable. More than 80% of AI projects fail to transition from prototype to full capability, and 88% of pilots never reach production, according to industry surveys cited by Foreign Affairs. Gartner projects that over 40% of agentic AI projects will be canceled by the end of 2027. The Foreign Affairs analysis argues that the U.S. should focus less on racing to a mythical AGI finish line and more on steady, practical investments in AI adoption, infrastructure modernization, and AI literacy initiatives.
To maintain leadership, the United States must also invest in universities and researchers who can advance AI safety, efficiency, and effectiveness. The Trump administration's plan to expand support for the National AI Research Resource, a consortium providing researchers, educators, and students with advanced AI tools, is highlighted as a critical step. Policies supporting AI research and development, such as those mandated by the 2022 CHIPS and Science Act, are expected to lead to more sophisticated algorithms and, eventually, more advanced systems.
Despite the allure of superintelligence, the consensus among many experts is that progress in AI will be measured and adoption-focused. The analogy to electricity is apt: electrification was revolutionary, but its transformative power was realized only through decades of incremental improvements and widespread adoption. The same may hold true for AI.
As the world stands at the threshold of unprecedented technological change, the race is not necessarily to the swiftest, but to those who can integrate, adapt, and scale AI’s capabilities in meaningful ways. The future, it seems, will be shaped not by a single, dramatic breakthrough, but by the collective efforts of innovators, policymakers, and societies to harness AI’s potential responsibly and effectively.