22 August 2025

AI-Generated Journalism Scandal Hits Major Publications

A fictitious writer used artificial intelligence to place fabricated stories in Wired, Business Insider, and other outlets, raising urgent questions about editorial safeguards and public trust.

In a saga that has sent shockwaves through the media industry, at least six major publications—including Wired, Business Insider, SF Gate, Cone Magazine, Index on Censorship, and Naked Politics—have been forced to remove articles after discovering they were written by a fictitious freelance journalist named Margaux Blanchard, whose work appears to have been generated by artificial intelligence. The revelations, first brought to light by an investigation from Press Gazette, have reignited urgent questions about the vulnerability of newsrooms to AI-generated content and the erosion of public trust in journalism.

The dominoes began to fall in early August 2025, when Dispatch editor Jacob Furedi received a pitch from Blanchard. The proposed story centered on Gravemont, a supposedly decommissioned mining town in rural Colorado that had, according to Blanchard, been transformed into one of the world’s most secretive training grounds for death investigation. The details were rich—perhaps too rich. Blanchard claimed to have reported on similar hidden sites before and to possess “clearance contacts in forensic circles,” but Furedi grew suspicious when he found no trace of Gravemont anywhere online. “You can’t make up a place,” Furedi later told Press Gazette, describing the pitch as “absolute bollocks.”

Furedi pressed Blanchard for evidence, requesting public records and more information about her sources. Blanchard responded with elaborate explanations, claiming she’d pieced together the story through interviews with retired forensic pathologists and former miners, and that Gravemont existed “just under the radar enough to evade coverage.” Yet, when asked for documentation, she sidestepped the request. Furedi’s suspicions solidified: “The pitch immediately sounded like it was written by ChatGPT,” he said, reflecting a growing concern among editors that AI-generated pitches are becoming increasingly sophisticated—and harder to spot.

It wasn’t just Dispatch that had been targeted. Press Gazette soon uncovered that at least six prominent outlets had published Blanchard’s work since April 2025. In May, Wired ran a feature by Blanchard about couples marrying in the virtual worlds of Minecraft and Roblox. The article, which included quotes from a purported 34-year-old ordained officiant in Chicago named Jessica Hu, was taken down two weeks later. “After an additional review of the article, Wired editorial leadership has determined this article does not meet our editorial standards. It has been removed,” read the editor’s note. No evidence could be found of Jessica Hu’s existence, nor of her supposed career as a digital officiant.

Similarly, Business Insider published two first-person essays by Blanchard in April 2025: “Remote work has been the best thing for me as a parent but the worst as a person,” and “I had my first kid at 45. I’m financially stable and have years of life experience to guide me.” After being alerted to the concerns by Press Gazette, Business Insider removed both pieces on August 19, 2025, replacing them with notes stating the articles “didn’t meet Business Insider’s standards.” Notably, one of these essays remained accessible on the Dutch version of the site for a time, highlighting how quickly misinformation can spread—and linger—online.

Other outlets were caught in the web. SF Gate, a major Californian news site, took down an article on Disneyland superfans that profiled a TikTok influencer named Kayla Reed, who Blanchard claimed had over 100,000 followers. No such influencer could be found. Cone Magazine removed a piece about indie streetwear brands, which cited individuals and companies that similarly appeared to be inventions. Index on Censorship withdrew a dispatch from Guatemala, originally attributed to Blanchard as a “freelance journalist covering human rights,” after concluding the story “appears to have been written by AI.” As a spokesperson for the publication told Press Gazette: “We have sadly become the victim of the very thing we’ve warned against.”

The pattern was clear: Blanchard’s articles often contained case studies of people who could not be verified and quoted supposed experts with no online footprint. The stories were well-written, plausible, and—crucially—difficult to fact-check without significant editorial resources. This is precisely what makes AI-generated content so insidious, say industry observers. “It’s vital that readers can trust what they read,” Banseka Kayembe, director of Naked Politics, told Press Gazette. After initially keeping Blanchard’s article live, Naked Politics later removed it, acknowledging it “failed to meet the journalistic standards Naked Politics adheres to through our regulator Impress.”

The episode has exposed not just individual lapses, but systemic vulnerabilities. As newsroom budgets shrink and the pressure to produce content mounts, the temptation—or necessity—to rely on freelance contributions grows. Yet, as Furedi pointed out, “especially at a time when people are making cuts left, right and centre in this industry, and reportage is largely viewed as a luxury in legacy outlets, it tends to be op-eds that are staying and long-form deeply reported features are first for the chop.” The result: a fertile environment for AI-generated “slop” to slip through the cracks.

The financial incentives are not trivial. Wired reportedly pays upwards of $2,500 for long-form narrative reporting, while Business Insider commissions can fetch $230 per article. Though it’s unclear how much Blanchard—if that is even a real person—profited from the scheme, the ability to build a portfolio of published bylines, even temporarily, could open doors to more lucrative assignments elsewhere.

Beyond the embarrassment for individual outlets, the scandal has reignited debate about AI’s growing role in journalism. Research from the University of Kansas found that readers’ trust in news outlets declines when they know AI is involved in content production. Separate findings from Trusting News suggest that even disclosing AI’s role can damage credibility. As Furedi noted, the proliferation of AI-generated pitches is “symptomatic of the direction that certain types of journalism are going in.”

Some publications have responded by tightening editorial processes and reviewing how they verify the identities of contributors and the authenticity of sources. Index on Censorship announced it is “reviewing our processes,” while Naked Politics said it is “urgently looking into reviewing and updating our editorial processes to limit this occurrence in future.” Yet, as the Blanchard affair makes clear, the challenge is formidable—and growing by the day.

In the end, the case of Margaux Blanchard stands as a warning for the digital age: as AI-generated content becomes more sophisticated, the line between authentic journalism and fiction grows ever blurrier, demanding vigilance, skepticism, and renewed commitment to the standards that underpin public trust.