In a move that has sent shockwaves through both the technology and finance sectors, the U.S. government is actively negotiating with Anthropic, the artificial intelligence firm behind the powerful new Claude Mythos model, about deploying the tool within major federal agencies. This comes despite Anthropic’s ongoing dispute with the Pentagon and its controversial designation as a supply chain risk, according to sources cited by Axios and Bloomberg.
Mythos, announced by Anthropic on April 7, 2026, is not just another AI chatbot. Instead, it represents a leap in cybersecurity capabilities—so much so that it has been withheld from general public release and offered only to a select group of companies and organizations. Anthropic’s stated goal is to allow these organizations, including tech giants like Amazon, Microsoft, and Nvidia, to test Mythos’ unprecedented abilities to find and exploit vulnerabilities in software, operating systems, and critical infrastructure, as reported by Reuters and the BBC.
The stakes are high. The White House Office of Management and Budget (OMB) confirmed in an April 14 email to Cabinet officials that it is setting up protections to allow major federal agencies to access Mythos. Gregory Barbaccia, the federal chief information officer, wrote, “We’re working closely with model providers, other industry partners, and the intelligence community to ensure the appropriate guardrails and safeguards are in place before potentially releasing a modified version of the model to agencies.” However, the email stopped short of providing a definitive timeline or specifying exactly how agencies might use the tool, according to Bloomberg News.
This careful approach comes as various branches of the U.S. government, including the Departments of Energy and Treasury, express urgent interest in using Mythos to defend critical sectors like the electric grid and the financial system. “All the intel agencies use Anthropic. Every agency except War wants to,” one administration official told Axios, referring to the Pentagon’s ongoing ban on Anthropic’s software. The official continued, “If you’re the Department of Energy, you don’t give a f*** about that. You’re worried about the Chinese attacking the energy grid. So you want Anthropic.”
Yet the Pentagon remains unmoved, having labeled Anthropic a supply chain risk and barred the company from military contracts. The root of the dispute lies in Anthropic’s refusal to allow its models to be used for mass surveillance or the development of fully autonomous weapons—restrictions the Pentagon deems too vague and limiting. “There’s progress with the White House. There’s not progress with [the Department of] War,” another administration official told Axios. Both Anthropic and the Pentagon have declined to comment further.
Outside the U.S., the ripples of Mythos’ emergence are being felt just as strongly. At the International Monetary Fund (IMF) meetings in Washington, DC, this week, finance ministers and central bankers from around the globe voiced grave concerns about the risks posed by Anthropic’s AI. Canadian Finance Minister François-Philippe Champagne told the BBC, “Certainly it is serious enough to warrant the attention of all the finance ministers... The issue that we’re facing with Anthropic is that it’s the unknown, unknown.” He emphasized the need for safeguards and robust processes to ensure the resiliency of global financial systems.
Similarly, the chief executive of Barclays, CS Venkatakrishnan, told the BBC, “It’s serious enough that people have to worry. We have to understand it better, and we have to understand the vulnerabilities that are being exposed and fix them quickly.” The governor of the Bank of England, Andrew Bailey, echoed these concerns, warning, “We are having to look very carefully now what this latest AI development could mean for the risk of cyber crime.” Bailey, who also chairs the Financial Stability Board, noted the challenge regulators face in deciding when and how to intervene: “If you go too early you risk missing the target and you risk distorting the evolution, and if you go too late things can get out of control.”
Anthropic’s own stance on Mythos is clear: the company believes its model’s ability to identify vulnerabilities could be a double-edged sword. In a recent blog post, Anthropic wrote, “AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities. The fallout—for economies, public safety, and national security—could be severe.”
To mitigate these risks, Anthropic has launched Project Glasswing, an initiative that limits Mythos’ use to a tightly controlled group of trusted partners. The company has also released a new version of its existing model, Claude Opus, designed to allow for broader—but less risky—testing of Mythos’ cybersecurity capabilities. The UK’s AI Security Institute, the only independent group so far to publish a report on Mythos, found that while the model is indeed powerful at exploiting weakly secured systems, it is not dramatically more advanced than its predecessor, Opus 4. “Our testing shows that Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed,” the Institute’s researchers concluded.
Despite the caution, governments and financial institutions are racing to get ahead of the threat. The U.S. Treasury has urged major banks to test their systems against Mythos, and British banks are set to receive access to the model in the coming week, according to Bloomberg. Pip White, Anthropic’s head of operations for the UK, Ireland, and Northern Europe, told Bloomberg, “As you would expect, the engagement I have had from UK CEOs in the last week has been significant.”
Christine Lagarde, president of the European Central Bank, summed up the dilemma facing policymakers: “The development we’ve seen with Anthropic and Mythos is a good example of a responsible company that is suddenly thinking: ‘Ah, that could be really good’—but if it falls in the wrong hands, it could be really bad. Everybody is keen to have a framework within which to operate... We need to work on that.”
Meanwhile, the financial world braces for more. Industry sources told the BBC that another prominent U.S. AI company may soon release a similarly powerful model, but without the same safeguards that Anthropic has put in place. James Wise, chair of the Sovereign AI unit at Balderton Capital, noted, “We hope the models that expose vulnerabilities are also the models which will fix them.”
For now, the U.S. government’s cautious engagement with Anthropic’s Mythos model underscores both the promise and peril of frontier AI. As the world’s most powerful institutions race to secure their systems, the challenge will be to strike the right balance between harnessing innovation and protecting against its most dangerous consequences.