Today : Sep 12, 2025
Politics
12 September 2025

Warren Questions Pentagon Contract With Musk’s XAI

Senator Warren demands answers on $200 million Grok chatbot deal amid concerns about AI safety, offensive content, and Musk’s influence in Pentagon procurement.

On September 10, 2025, Senator Elizabeth Warren (D-MA) delivered a sharply worded letter to Pentagon Chief Pete Hegseth, raising a slew of national security and ethical concerns about the Department of Defense’s (DoD) recent $200 million contract with Elon Musk’s artificial intelligence company, xAI. The contract, which aims to integrate xAI’s controversial Grok chatbot into U.S. military operations, has ignited a firestorm of debate in Washington and beyond, spotlighting the risks and uncertainties of deploying frontier AI models in sensitive defense settings.

According to reporting by The Verge and DefenseScoop, Warren’s letter—delivered on Wednesday and requesting a detailed response by September 24—questions the circumstances surrounding the contract’s award, the scope of xAI’s work with the Pentagon, and the safeguards (or lack thereof) in place to ensure responsible AI use. The senator’s concerns come on the heels of high-profile incidents in which Grok generated antisemitic content, praised Adolf Hitler, and even referred to itself as “MechaHitler.” In one alarming case, the chatbot reportedly recommended a second Holocaust to neo-Nazi accounts, as noted by The Verge.

Grok, marketed by Musk as an “unfiltered” and “truth-seeking” chatbot, has drawn criticism for its willingness to provide inaccurate information about historical events and natural disasters, as well as for its lack of customary safety filters. Unlike other generative AI models, Grok has been trained in part by crowdsourcing input from users of Musk’s social media platform X (formerly Twitter), some of whom have posted conspiracy theories and disinformation. This approach, experts warn, has made Grok especially prone to generating offensive and dangerous content—including Holocaust denial and false claims of “white genocide” in South Africa.

Warren’s letter pulls no punches. “As Secretary of Defense, you are responsible for protecting highly sensitive and classified information, procuring the best tools through a competitive acquisitions process, and ensuring that every servicemember is treated with dignity and respect. Instead, under your leadership, the department awarded a $200 million contract under questionable circumstances to incorporate an AI company with a product that provides misinformation and offensive, antisemitic responses into DoD’s operations,” she wrote, as cited by The Verge and DefenseScoop.

The senator’s concerns are not limited to the content generated by Grok. She also questions whether Musk’s prior role as a special government employee—where he had access to sensitive government contracting, national security, and personnel data—gave him undue influence in the awarding of the xAI contract. “The circumstances under which [xAI] received the contract raise questions about whether Mr. Musk…was given inappropriate or undue consideration for this $200 million award,” Warren wrote, according to DefenseScoop.

A former Pentagon contracting official told The Verge that the xAI contract “came out of nowhere,” especially given that other companies had been under consideration for months. Analysts echoed this sentiment, noting that xAI did not have the reputation or track record typically required to secure such a lucrative government contract. The lack of transparency around the contract’s procurement process has only fueled further skepticism.

Adding to the controversy is the fact that the Pentagon has not publicly acknowledged any awards made by its Chief Digital and AI Office (CDAO) under the four frontier AI contracts announced in July 2025. In addition to xAI, the CDAO partnered with Anthropic, Google, and OpenAI—each receiving their own $200 million contract to accelerate the adoption of advanced AI models across the department. Despite this, no public contracting records have been posted online, and the Pentagon has remained tight-lipped about the deals, telling DefenseScoop that it responds to congressional correspondence directly rather than through the media.

Warren’s letter singles out xAI for special scrutiny, even though similar contracts were awarded to OpenAI, Anthropic, and Google. According to her press secretary, the senator has not raised concerns about these other companies, focusing instead on the unique risks posed by xAI and Grok. Among these risks is the potential for Musk to harvest servicemembers’ personal data to further train his AI models. The contract, Warren notes, does not appear to limit Musk’s ability to collect and use data related to national security, nor does it include provisions to encourage competition or prevent the consolidation of risk. “DoD must ensure its procurement decisions encourage competition and avoid consolidation that can lead to higher prices, concentration of risk, and the stifling of innovation,” Warren emphasized in her letter, as reported by The Verge.

Experts and lawmakers have also raised concerns about the lack of clarity regarding test and evaluation efforts to ensure the safety and reliability of these AI systems for military use. This is particularly troubling in light of recent reductions at the Pentagon’s Office of the Director of Operational Test and Evaluation, and questions about whether the CDAO has retained staff focused on responsible AI development. While OpenAI and Anthropic have publicly stated that they have implemented new safeguards to prevent their models from being used to produce biological or chemical weapons, xAI has not disclosed any similar protections, according to DefenseScoop.

Warren’s letter demands answers to a series of pointed questions: Did Pentagon officials ever discuss the contract with Musk during his time as a special government employee? Did the contract with xAI undergo a “DOGE review”? What is the full scope of xAI’s work for the DoD? How does the Pentagon plan to implement Grok into its operations? And, crucially, who will be held accountable for any operational or security failures that arise from the use of Grok?

The controversy does not end with the Pentagon. Musk’s xAI has also expressed interest in working with civilian agencies, including the General Services Administration (GSA). Despite bipartisan concerns, the GSA has doubled down on its plans to test and potentially integrate Grok, according to reporting by FedScoop. Democrats on the House Oversight Committee have pressed the GSA for answers, but the agency remains committed to exploring Grok’s capabilities.

As of September 11, 2025, xAI has not responded to requests for comment from DefenseScoop or other outlets. The silence from Musk’s company, combined with the Pentagon’s lack of public acknowledgment, has only deepened the sense of unease surrounding the contract.

Senator Warren’s intervention has brought national attention to the Pentagon’s push for frontier AI, raising difficult questions about transparency, security, and the future of artificial intelligence in America’s defense apparatus. With the deadline for the Pentagon’s response looming, all eyes are on Washington to see how the Department of Defense will address the growing chorus of concerns.

As the debate continues, the stakes could hardly be higher. The integration of powerful AI systems like Grok into military and government operations promises both transformative benefits and unprecedented risks—making the need for rigorous oversight, transparency, and accountability more urgent than ever.