In a stunning turn of events in the artificial intelligence world, Anthropic’s Claude has overtaken OpenAI’s ChatGPT to become the most downloaded free app in the United States on Apple’s App Store. The rapid ascent of Claude to the top spot comes amid a heated controversy involving the Pentagon, government contracts, and the ethical use of AI—an episode that has not only shaken the tech industry but also galvanized public opinion and sparked a digital exodus from ChatGPT.
According to Mashable and Axios, Claude’s rise to number one in app downloads was triggered by a very public standoff between Anthropic and the U.S. Department of Defense (often referred to in these reports as the Department of War). The controversy began when OpenAI CEO Sam Altman announced via X (formerly Twitter) that OpenAI would partner with the Department of Defense to deploy artificial intelligence through classified military networks. This move, coming just after Anthropic lost its own Pentagon contract, set off a firestorm on social media and within the tech community.
Anthropic’s CEO, Dario Amodei, had taken a strong public stance against the unrestricted use of AI by governments. He specifically warned of the dangers posed by “mass domestic surveillance” and “fully autonomous weapons” powered by artificial intelligence. As reported by Mashable, Amodei’s position resonated with a public increasingly wary of the speed and scope of AI advancements. In his words, “We will challenge any supply chain risk designation in court, and no intimidation or punishment from the Department of War will force us to cave.”
This principled stand, however, drew sharp criticism from former President Donald Trump. Taking to Truth Social, Trump lambasted Anthropic’s leadership, writing, “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.” This comment was swiftly followed by a move from Defense Secretary Pete Hegseth, who sought to designate Anthropic as a “supply-chain risk to national security”—an unprecedented label for a U.S. tech company that would effectively bar it from future government contracts.
The fallout from this clash was immediate and dramatic. Anthropic lost its Pentagon contract on February 28, 2026, after refusing to loosen safeguards for military use of its AI model, Claude. The company drew clear red lines, stating it would not allow its technology to be used for mass surveillance or autonomous lethal weapons. Hours after Anthropic's contract loss, OpenAI announced its own deal with the Pentagon, with the Department of Defense agreeing to red lines similar to those Anthropic had set.
As the government and tech giants wrestled over the future of AI, the public’s response was swift and decisive. Social media movements sprang up almost overnight. The Instagram account “quitGPT” gained 10,000 followers in the wake of the news, according to its operator. On Reddit, a post exhorting users to “Cancel and Delete ChatGPT!!!” racked up 30,000 upvotes. A viral video even showed chalk art outside Anthropic’s San Francisco offices, reading “you give us courage,” a testament to the company’s growing grassroots support.
Some users pointed to political donations as part of their rationale for ditching ChatGPT. In particular, OpenAI president Greg Brockman’s previous $25 million donation to a pro-Trump super PAC was cited as a factor by those shifting their loyalty to Anthropic. The public’s sensitivity to both ethical and political issues surrounding AI was on full display.
Meanwhile, the numbers tell a story of surging momentum for Anthropic. According to an Anthropic spokesperson cited by Mashable, free users of Claude have increased by more than 60% since January 2026. Daily signups have tripled since November 2025, breaking all-time records every day during the week of the controversy. Paid subscribers have more than doubled this year. Before Super Bowl LX, which took place on February 8, 2026, Anthropic ranked #42 in the U.S. app charts. Since then, it has consistently placed in the top 10, culminating in its recent climb to the number one spot.
Data from OpenRouter, as reported by Axios, shows that Anthropic's Claude Sonnet 4.5 ranked fifth among AI models for February 2026, with the top spot going to China-based MiniMax. In fact, twelve different AI models outpaced OpenAI's offerings over the past month, suggesting a rapidly diversifying market. Still, ChatGPT remains a close second on the App Store charts and retains what many consider a "first-mover advantage" in the AI space.
The episode has also sparked a rare show of solidarity within the tech community. More than 700 employees from both Google and OpenAI signed an open letter titled “We Will Not Be Divided,” which states, “We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.” This letter, widely circulated online, underscores the depth of concern among AI developers about the ethical direction of their industry.
For Anthropic, the long-term business impact of being blacklisted by the Pentagon remains uncertain. Losing the ability to secure government contracts could be a significant blow. However, in the short term, the company’s willingness to draw ethical lines—and the resulting backlash from political leaders—has fueled a surge in popularity, especially among everyday users and enterprise customers. Recent product launches tailored for swift business adoption have only amplified Claude’s appeal in the corporate world.
It's worth noting that the broader context of this battle includes ongoing legal disputes. Mashable disclosed that its parent company, Ziff Davis, filed a lawsuit against OpenAI in April 2025, alleging copyright infringement in the training and operation of its AI systems. This legal wrangling adds yet another layer of complexity to the already fraught landscape of AI development, regulation, and competition.
As the dust settles, it’s clear that the fight for the soul of artificial intelligence is far from over. The latest flare-up between Anthropic, OpenAI, and the U.S. government has shown that public opinion still carries significant weight—and that ethical considerations can drive real-world change, even in the fast-moving world of AI. With new players emerging and alliances shifting, the only certainty is that the story of AI’s future will be shaped by both technical innovation and the values we choose to uphold.