The artificial intelligence (AI) industry is experiencing a period of rapid change and mounting concern, as insiders at leading companies like OpenAI and Anthropic sound the alarm about the technology’s potential risks—even as business adoption of these tools surges to new heights. In the past week alone, a flurry of resignations, viral warnings, and strategic shifts at the top AI firms has signaled both unprecedented growth and deepening unease about the future of advanced AI models.
According to reporting by Axios, the sense of urgency is palpable among those closest to the technology. On February 10, 2026, an Anthropic researcher publicly announced his departure, explaining his decision as a desire to “write poetry about the place we find ourselves”—a fitting way to capture the gravity of the moment. Only days later, a researcher at OpenAI also left, this time explicitly citing ethical concerns. Hieu Pham, another OpenAI employee, took to X (formerly Twitter) on February 11 to declare, “I finally feel the existential threat that AI is posing.” These are not isolated voices: Jason Calacanis, a prominent tech investor and co-host of the All-In podcast, observed on X, “I’ve never seen so many technologists state their concerns so strongly, frequently and with such concern as I have with AI.”
Perhaps the clearest sign of the anxiety rippling through the AI world came from entrepreneur Matt Shumer, whose viral post on February 11 compared the current AI moment to the eve of the COVID-19 pandemic. The post, which laid out the risks of AI fundamentally reshaping jobs and society, racked up a staggering 56 million views in just 36 hours. The message resonated widely: AI isn’t just evolving—it’s accelerating at a pace that has even its creators scrambling to keep up.
Yet, while the warnings are growing louder, the reality on the ground is more nuanced. As Axios notes, most people working at OpenAI, Anthropic, and similar companies remain optimistic that they can steer the technology safely, without triggering mass job loss or societal upheaval. Still, both companies have acknowledged the risks. Anthropic recently published a report highlighting the danger of AI being used in serious crimes, including the creation of chemical weapons without any human intervention. This so-called “sabotage report” examined scenarios in which AI operates autonomously, underscoring the potential for harm if left unchecked.
OpenAI, meanwhile, made headlines for dismantling its mission alignment team in early February 2026. According to Platformer, the seven-member team—originally created in 2024 to ensure that artificial general intelligence (AGI) would benefit all of humanity—was reassigned to other groups within the company. Joshua Achiam, who led the mission alignment effort, transitioned to a new role as OpenAI’s “chief futurist.” The move comes amid broader leadership changes at OpenAI. CEO Sam Altman had previously tapped Achiam to lead the alignment team, and the company weathered the surprise departure of chief technology officer Mira Murati and two key researchers in late 2024, though one of those researchers, Barret Zoph, has since returned.
For many in the AI industry, these developments are more than just inside baseball—they’re signs of a technology in flux, with profound implications for the economy and society at large. The latest generation of AI models, including Anthropic’s Claude and OpenAI’s ChatGPT, is not only improving rapidly but also showing the ability to build complex products and even refine its own work, sometimes with minimal human oversight. OpenAI’s last model helped train itself, while Anthropic’s viral Cowork tool was, astonishingly, built by the AI itself. These leaps in autonomy have prompted a wave of soul-searching among researchers and executives alike.
Yet, as the existential debate rages, business adoption of AI tools is booming. According to a February 2026 report from fintech giant Ramp, 46.8 percent of its U.S.-based business customers now pay for access to AI tools—a dramatic increase from previous years. Anthropic’s Claude, in particular, is on a tear: one in five U.S. businesses on Ramp now pays for Anthropic, up from just one in 25 a year ago. In January 2026 alone, Anthropic’s share of Ramp’s business customers jumped from 16.7 percent to 19.5 percent, while OpenAI’s market share dipped slightly from 36.8 to 35.9 percent.
At first glance, it might appear that Anthropic’s rise is coming at OpenAI’s expense. But the data tells a different story. Ramp economist Ara Kharazian explained that churn rates for both companies—roughly 4 percent of users canceling subscriptions each month—are nearly identical. “If businesses were switching from OpenAI to Anthropic, you’d see that in OpenAI’s churn,” Kharazian wrote. “You don’t.” Instead, it appears that most of Anthropic’s growth is coming from businesses that already use OpenAI and are now adding Anthropic as a second provider. In fact, around 79 percent of Anthropic’s customers also pay for OpenAI. “The market is young enough that businesses are buying from more than one model company,” Kharazian observed. “Engineers prefer one model, the sales team uses another, and the company pays for both.”
This dual adoption reflects a broader trend: AI is not a zero-sum game, at least not yet. Companies are experimenting, hedging their bets, and seeking out the best tools for different teams and functions. As Kharazian put it, “Almost all Anthropic customers already use OpenAI, but not even a majority of OpenAI customers are on Anthropic.” The implication is that the AI market remains fluid, with plenty of room for multiple players to grow—even as competition heats up and product lines begin to blur. “It kind of feels like OpenAI is working on being a little bit more like Anthropic, and Anthropic is working on being a little bit more like OpenAI,” Kharazian noted.
Despite the business world’s obsession with AI, the debate over its risks and rewards has yet to fully register in Washington. As Axios pointed out, the latest round of insider warnings has made little impression at the White House or in Congress, where policy discussions about AI safety and regulation remain sporadic at best. That disconnect is striking, given the technology’s potential to disrupt industries ranging from software engineering to legal services—and the growing evidence that AI models can now operate and improve themselves with little human input.
For now, the AI disruption is here, and it’s happening faster and more broadly than even many experts anticipated. As insiders wrestle with the ethical dilemmas and business leaders race to adopt the latest tools, one thing is clear: the stakes are higher than ever, and the world is watching to see how the AI story unfolds.