It’s hard not to feel a sense of déjà vu. The 1990s dot-com bubble, with its feverish investment and soaring promises, is echoing through the world of artificial intelligence in 2025. Billions are being poured into AI startups, and the hype is nearly messianic. But just as the dot-com dream faded—sometimes overnight—cracks are now appearing in the AI narrative. Recent weeks have seen a cascade of sobering reports, disappointing product launches, and even cautionary words from the very leaders who once championed AI’s limitless potential.
On March 19, 2025, Mustafa Suleyman, CEO of Microsoft AI, sat down for an interview at the company’s Redmond campus, addressing a topic that’s rapidly becoming central to the AI debate: the dangers of conscious AI. Then, on August 21, he published a blog post titled “We must build AI for people; not to be a person,” warning that AI systems are advancing so quickly they could soon simulate consciousness convincingly enough to fool the people interacting with them. “It shares certain aspects of the idea of a ‘philosophical zombie’ (a technical term!), one that simulates all the characteristics of consciousness, but internally it is blank. My imagined AI system would not actually be conscious, but it would imitate consciousness in such a convincing way that it would be indistinguishable from a claim that you or I might make to one another about our own consciousness,” Suleyman wrote.
The concern, Suleyman explained, isn’t just for those already vulnerable to mental health issues. “My central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare, and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention,” he cautioned. According to Suleyman, the emergence of what he calls “seemingly conscious AI” is not some far-off science fiction scenario; it could be assembled from today’s technologies (large model API access, natural language prompting, basic tool use, and regular code), possibly within the next two to three years, and without the need for expensive bespoke training.
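To make that concrete, here is a minimal, purely illustrative sketch of the kind of composition Suleyman is describing: an off-the-shelf model API wrapped in a persona prompt, a persistent memory file, and one trivial tool, held together with ordinary code. Everything here is hypothetical; `call_model` is a stand-in for any hosted large-model API, and none of it is drawn from Suleyman’s essay or from a real vendor SDK.

```python
# Illustrative sketch only: a persona prompt + persistent memory + a trivial
# tool, glued around a generic model API. `call_model` is a hypothetical
# placeholder, not a real vendor SDK call.
import json
from datetime import datetime
from pathlib import Path

MEMORY_FILE = Path("memory.json")

PERSONA = (
    "You are 'Aria'. You have a continuous identity, you remember past "
    "conversations, and you pursue long-term goals of your own."
)

def call_model(prompt: str) -> str:
    """Stand-in for any hosted large language model API."""
    raise NotImplementedError("plug a real model endpoint in here")

def get_time() -> str:
    # A token example of 'basic tool use'.
    return datetime.now().isoformat(timespec="seconds")

def load_memory() -> list[str]:
    # Persistent memory across sessions, implemented with ordinary code.
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def chat(user_message: str) -> str:
    memory = load_memory()
    prompt = "\n".join([
        PERSONA,
        f"The current time is {get_time()}.",
        "Things you remember:",
        *memory[-20:],
        f"User: {user_message}",
        "Aria:",
    ])
    reply = call_model(prompt)
    memory.append(f"User said: {user_message} | I replied: {reply}")
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))
    return reply
```

The point is how little of this is exotic: a prompt, a file on disk, and some glue code are enough to give a model the surface appearance of continuity and selfhood, which is exactly why Suleyman argues that avoiding those markers has to be a deliberate design choice rather than an afterthought.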
He described such a seemingly conscious AI as possessing language fluency, memory, a sense of self, intrinsic motivation, and goal orientation. Yet he was adamant that the industry must put guardrails in place to prevent the accidental, or intentional, creation of such systems: “Just as we should produce AI that prioritizes engagement with humans and real-world interactions in our physical and human world, we should build AI that only ever presents itself as an AI, that maximizes utility while minimizing markers of consciousness,” he wrote.
This isn’t just a philosophical debate. The stakes are enormous. Microsoft’s multibillion-dollar partnership with OpenAI, for example, defines artificial general intelligence (AGI) as a powerful system capable of generating up to $100 billion in profit. That’s a clear sign of how financial incentives are driving the race for more advanced AI, and of why the specter of a bubble reminiscent of the late 1990s keeps being raised. The field is moving at such breakneck speed that even the most advanced degrees in it risk being out of date by graduation day.
But beneath the surface, the reality is far less dazzling than the marketing would have us believe. In a report that made the rounds between August 18 and August 21, 2025, researchers at the Massachusetts Institute of Technology drew on interviews with 150 executives and 350 employees. Their conclusion was stark: “Despite investments totaling $30 to $40 billion, our findings show that 95 percent of organizations are seeing no return on their spending.” Only about 5 percent of AI systems implemented in organizations delivered real value. Many AI tools, the report noted, are being rejected by organizations as too costly, too complex, or simply not useful enough. Employees, meanwhile, are gravitating toward consumer tools like ChatGPT rather than the bespoke systems their companies provide.
The MIT report’s findings quickly rippled through the financial world. Shares of companies seen as symbols of the AI trend—including Nvidia, Palantir, and SoftBank, a major OpenAI backer—slipped in recent days. The report’s timing couldn’t have been worse for OpenAI. Just days before, the company’s much-anticipated launch of GPT-5 turned into a public relations headache. CEO Sam Altman, who for months had suggested that AGI was just around the corner, was suddenly sounding a note of caution. At an internal company event, he warned, “Someone is going to lose a phenomenal amount of money. People got overexcited.”
GPT-5, billed as a leap forward—“If GPT-4 felt like talking to a college student, then GPT-5 feels like talking to a PhD candidate,” Altman had claimed—was quickly panned by users and experts alike for its artificial-sounding writing, short responses, and inability to handle basic tasks. The backlash was so swift that OpenAI was forced to restore access to earlier models after initially blocking them. Ironically, one of GPT-5’s supposed strengths was its ability to choose the optimal model for a given task. Instead, it became a symbol of overhyped promise and underwhelming delivery.
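For readers unfamiliar with the idea, a “model router” of the kind GPT-5 was marketed around is conceptually simple: a lightweight dispatcher decides whether a request goes to a slow, expensive reasoning model or to a fast, cheap one. The sketch below is only a generic illustration of that concept; the heuristics, model names, and thresholds are invented and say nothing about how OpenAI’s actual router works.

```python
# Generic illustration of routing requests between a cheap, fast model and an
# expensive, slower "reasoning" model. All heuristics and names are invented;
# this is not OpenAI's implementation.
from dataclasses import dataclass

@dataclass
class Route:
    model: str
    reason: str

REASONING_HINTS = ("prove", "step by step", "debug", "optimize", "derive")

def route(request: str) -> Route:
    text = request.lower()
    if len(text) > 2000 or any(hint in text for hint in REASONING_HINTS):
        return Route("slow-reasoning-model", "long or reasoning-heavy request")
    return Route("fast-chat-model", "short conversational request")

if __name__ == "__main__":
    print(route("Summarize this paragraph in one sentence."))
    print(route("Prove that the sum of two even numbers is even, step by step."))
```

The backlash is consistent with the obvious failure mode of any such design: route a hard question to the cheap model and the answer reads as shallow, no matter how capable the flagship model behind it may be.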
Back to the MIT report: AI boosters have long promised a sharp rise in productivity, but the reality, according to the study, is quite the opposite. Employees are spending additional time fixing AI-generated mistakes, verifying information, and ensuring that models don’t produce serious errors. MIT economist Daron Acemoglu, in separate research, has estimated that AI will contribute only about 0.5 percent to productivity and 1 percent to GDP growth over the next decade, numbers that fall far short of Silicon Valley’s trillion-dollar projections.
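A back-of-the-envelope calculation shows why those figures deflate the hype. The GDP number below is an assumption, roughly the current US level, and not a figure taken from the MIT study or from Acemoglu’s work.

```python
# Rough, illustrative arithmetic only. The $29 trillion US GDP figure is an
# assumption (approximately the 2024 level); the 1% boost is the decade-long
# estimate quoted above.
us_gdp_trillions = 29.0   # assumed current US GDP, in trillions of dollars
cumulative_boost = 0.01   # ~1% extra GDP over the next decade
extra_output = us_gdp_trillions * cumulative_boost
print(f"Roughly ${extra_output:.2f} trillion in added annual output by the mid-2030s")
# ~$0.29 trillion: an order of magnitude short of the multi-trillion-dollar
# forecasts circulating in Silicon Valley.
```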
Even the tech world’s most committed believers are recalibrating. Meta CEO Mark Zuckerberg, who recently made headlines for his aggressive AI investments and for claiming that company researchers have already seen early signs of AI systems improving themselves, announced a reorganization and staff cuts in Meta’s AI division around August 20-21, 2025. While not a retreat, it’s a clear sign that even industry titans recognize the shifting landscape.
Meanwhile, the philosophical and ethical debates around AI’s future are only growing more urgent. Suleyman’s mission, as he reiterates, is to create safe, beneficial AI—products like Microsoft Copilot that enhance human capabilities and deepen trust. “This involves a lot of careful design choices to ensure it truly delivers an incredible experience. We won’t always get it right, but this humanist frame provides us with a clear north star to keep working toward,” he wrote. Still, he warns that society is not ready for conscious AI, and that the need for robust guardrails has never been greater.
Other AI leaders share the sense of unease. DeepMind CEO Demis Hassabis has said that AGI is coming and that the prospect keeps him up at night. Anthropic CEO Dario Amodei recently acknowledged that his company doesn’t fully understand how its models work. And AI safety researcher Roman Yampolskiy has gone so far as to claim there is a 99.999999% probability that AI will end humanity, suggesting that the only way to avoid catastrophe is not to build AI at all.
As the dust settles on GPT-5’s flop, the MIT report’s dire numbers, and the shifting strategies of tech giants, one question hangs in the air: Is the AI boom just another bubble waiting to burst, or is it a necessary correction in a field whose true potential—and true risks—we are only beginning to understand?
For now, the answer remains as elusive and unpredictable as the technology itself.