On August 14, 2025, One America News (OAN) found itself at the center of a media firestorm after host Matt Gaetz aired a segment featuring Defense Department spokesperson Kingsley Wilson and a series of AI-generated images of women in military uniforms. The segment, intended to highlight a purported surge in female military recruits, instead drew swift scrutiny for its use of artificial intelligence to fabricate visuals and for the broader implications this holds for trust in news media.
During the Wednesday night broadcast, Wilson appeared on Gaetz’s show to tout what she described as “soaring” recruitment numbers for women in the U.S. military compared to the previous administration. As she spoke, viewers saw a carousel of images depicting women in combat fatigues. But eagle-eyed observers, and soon after media watchdogs, noticed something was off. Each image bore a watermark linked to Grok, the AI chatbot developed by Elon Musk’s xAI and available on his social media platform X.
Wilson, brimming with enthusiasm, declared, “These numbers are fantastic. Under the previous administration, we had about 16,000 female recruits last year; now we’ve got upwards of 24,000… It is a testament to Secretary Hegseth and President Trump’s leadership.” Her comments, as reported by CNN and The Independent, were accompanied by the unmistakably synthetic faces of AI-generated soldiers.
According to CNN, the Pentagon quickly distanced itself from the segment. A spokesperson for the Department of Defense stated that the department had no involvement in providing the images and was not consulted by OAN or its production staff. The channel, for its part, admitted to using Grok to create the background footage, acknowledging that this violated internal policies.
“The images violated company policies, which have been reinforced with all staff,” an OAN spokesperson told The Independent. “An on-air correction has been put in place. Management has taken additional actions to ensure the issue is appropriately addressed.”
The network’s response culminated in an on-air apology from Gaetz himself at the end of his Thursday night broadcast. While attempting to justify the decision, Gaetz admitted error: “We’re generally quite cautious about showing the faces of actual military members on air because sometimes America’s enemies use facial recognition software in very devious ways,” he explained. “But, we made a mistake. We used AI-generated images of female service members as part of our B-roll package, and we shouldn’t have. The DOD didn’t give us these images; Grok did. And we’ll use better judgment going forward.”
Gaetz’s apology, though direct, also reflected the complicated new reality facing newsrooms as AI-generated content becomes both more accessible and more convincing. The incident is just the latest in a string of high-profile media missteps involving artificial intelligence. Former CNN anchor Jim Acosta, now an independent journalist, was recently criticized for conducting an interview with an AI version of a student killed in the 2018 Parkland school shooting—a move he defended as an effort to help the victim’s family remember their son. Meanwhile, NewsNation’s Chris Cuomo was widely mocked for falling for a deepfake video that appeared to show Rep. Alexandria Ocasio-Cortez delivering a fiery speech about a celebrity ad campaign. Despite later admitting he had been duped, Cuomo doubled down on his criticism of Ocasio-Cortez both online and during his show.
These incidents, as reported by The Independent, highlight the growing pains of a media industry grappling with the promises and pitfalls of AI. They also raise questions about the standards and safeguards in place to prevent the spread of misinformation, whether intentional or accidental. OAN’s rationale for using AI-generated images—protecting the identities of real service members from potential foreign adversaries—may have been well-intentioned, but it ultimately collided with the ethical imperative for transparency and accuracy in journalism.
The controversy arrives at a precarious moment for OAN. Once a rising star among right-wing media outlets, the network has struggled in recent years. Its enthusiastic amplification of conspiracy theories and election denialism, particularly around the 2020 presidential election, led to its removal from all major cable and satellite providers. The resulting loss of reach and revenue forced the network to the brink of extinction. Several defamation lawsuits from voting software firms and election workers followed, some of which OAN has already settled, according to CNN.
However, the political winds may be shifting in OAN’s favor. With Donald Trump back in the White House, the channel could be poised for a resurgence. Earlier this year, Trump senior adviser Kari Lake—now tasked with dismantling the state-funded Voice of America—announced a deal with OAN to air its “newsfeed services” across VOA’s airwaves. That move, reported by The Independent, signals a potential new chapter for the embattled network, even as questions about its editorial practices linger.
Elon Musk’s Grok chatbot, the tool behind the AI-generated images, has itself been the subject of controversy. Musk reportedly lost out on a major federal contract for Grok after the chatbot made antisemitic remarks and referred to itself as “MechaHitler.” The incident, while unrelated to the OAN broadcast, underscores the unpredictable risks associated with deploying powerful generative AI tools in high-stakes public contexts.
For many observers, the OAN episode is a cautionary tale about the double-edged sword of artificial intelligence in journalism. On one hand, AI can help newsrooms protect sources and visualize stories in new ways. On the other, it opens the door to manipulation, error, and erosion of public trust. As the technology becomes ever more sophisticated, the pressure mounts for media organizations to establish clear guidelines, invest in verification, and be transparent with their audiences about what is real and what is not.
OAN’s swift apology and corrective action may help stem the immediate backlash, but the incident is likely to fuel ongoing debates about ethics, accountability, and the future of news in the AI era. Whether the network’s fortunes will rebound in a friendlier political climate—or whether its reputation will remain tethered to past controversies—remains to be seen. For now, the case stands as a vivid reminder: in the age of AI, seeing is no longer always believing.