On December 11, 2025, The Washington Post made headlines for a bold technological leap: the launch of “Your Personal Podcast,” an artificial intelligence-powered audio platform that lets users customize their news experience by choosing topics, hosts, and even the length of each episode. The promise? A podcast tailored to each listener’s interests, delivered in real time by AI-generated voices. But less than 48 hours later, what was meant to be a showcase of cutting-edge journalism had instead become a cautionary tale about the pitfalls of AI in the newsroom.
The rollout, as reported by Semafor, quickly turned chaotic. Editors and staffers at the Post were stunned to discover that the AI-driven podcasts were riddled with errors—some of them egregious. The AI hosts not only mispronounced names but also invented quotes, misattributed information, and, perhaps most alarmingly, editorialized by presenting a source’s words as the newspaper’s official stance. In one particularly jarring instance, the AI podcast announced it would discuss “whether or not people with intellectual disabilities should be executed,” offering no context until much later in the episode. Readers noticed, and so did the paper’s own journalists.
“It is truly astonishing that this was allowed to go forward at all,” one editor fumed in internal Slack messages obtained by Semafor. “Never would I have imagined that The Washington Post would deliberately warp its own journalism and then push these errors out to our audience at scale.” Another staffer was blunt: “It’s a total disaster. I think the newsroom is embarrassed.” The Post’s head of standards, Karen Pensiero, echoed the frustration, calling the situation “frustrating for all of us.”
The fallout wasn’t limited to the newsroom. The Washington Post Guild, representing the paper’s journalists and staff, issued a statement to Beritaja: “We are concerned about this new product and its rollout,” the Guild said, warning that it undermines the Post’s mission and the value of its journalists’ work. The Guild pointed out a core contradiction: “Why would we support any technology that is held to a different, lower standard?”
From a technical standpoint, “Your Personal Podcast” was ambitious. According to Bailey Kattleman, the Post’s head of product and design, the podcast uses large language models (LLMs) to convert articles into audio scripts. A second LLM vets the scripts for accuracy before the final narration is stitched together and delivered by advanced AI voice clones. The project was developed in partnership with Eleven Labs, a company specializing in AI voice technology. Kattleman described the podcast as “an AI-powered audio briefing experience,” emphasizing that it was still in its early beta phase and “not a traditional editorial podcast.” She also teased future updates that would allow listeners to interact with the podcast, asking follow-up questions and digging deeper into stories.
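The two-stage design Kattleman describes, in which one model drafts the audio script and a second checks it before narration, can be sketched in miniature. The Post has not published its implementation, so everything below is a hypothetical stand-in: `draft_script` and `vet_script` are placeholder functions where a real system would call an LLM at each stage, and the vetting step here is only a crude verbatim-quote check rather than a genuine accuracy model.

```python
# Illustrative sketch of a "draft, then vet" script pipeline, loosely
# modeled on the architecture described in the article. All names and
# checks are assumptions; a production system would invoke LLM APIs
# at both stages and a text-to-speech service afterward.

import re
from dataclasses import dataclass


@dataclass
class Article:
    headline: str
    body: str


def draft_script(article: Article) -> str:
    """Stage 1: turn the article into a spoken-style script.
    A real system would prompt an LLM here; this stub just
    produces a trivial conversational rendering."""
    return f"Today we're covering: {article.headline}. {article.body}"


def vet_script(script: str, article: Article) -> list[str]:
    """Stage 2: check the draft against the source article.
    A real vetting model would look for invented quotes and
    misattributed claims; this stub merely flags any quoted
    passage that does not appear verbatim in the source text."""
    problems = []
    for quote in re.findall(r'"([^"]+)"', script):
        if quote not in article.body:
            problems.append(f"unverified quote: {quote!r}")
    return problems


article = Article(
    headline="City council approves transit budget",
    body='Council member Lee said the plan was "a long time coming."',
)

script = draft_script(article)
issues = vet_script(script, article)
print(issues)  # a faithful draft passes the check

# A script containing a fabricated quote would be flagged instead.
bad_script = script + ' Lee also called it "a historic failure."'
print(vet_script(bad_script, article))
```

The sketch also illustrates why the safeguard can fail in practice: if the vetting stage is itself a generative model, as in the Post's described setup, it can miss the very hallucinations it is meant to catch, which is the failure mode the newsroom encountered.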
But the technical wizardry wasn’t enough to overcome the fundamental problems. As Semafor and other outlets reported, the AI’s errors weren’t just minor slip-ups—they were the kind of mistakes that strike at the heart of journalistic integrity. Inventing quotes, misrepresenting the newspaper’s editorial stance, and failing to provide crucial context are all cardinal sins in the world of news. For a publication like The Washington Post, which has long prided itself on accuracy and standards, the missteps were especially painful.
The disconnect between the newsroom and the product team was palpable. While the product division viewed the errors as a natural part of rolling out an experimental feature, journalists saw them as an existential threat to their profession. “If we were serious, we would pull this tool immediately,” one editor argued. The sense of urgency was heightened by the broader political context: the rollout came just days after the White House had launched a site criticizing journalists, including those at the Post, for stories with corrections or editor’s notes attached.
Despite the backlash, the Post’s leadership maintained that the AI podcast was an experiment, not a replacement for traditional journalism. “We think [conventional podcasts] have a unique and enduring role, and that’s not going away at the Post,” Kattleman told Beritaja. Still, the move was part of a broader strategy under owner Jeff Bezos, who has encouraged the paper to embrace AI technologies—from article summaries to chatbots trained on Post content. Bezos himself outlined a vision for the paper’s future in a February email, stating, “We are going to be writing every day in support and defense of two pillars: personal liberties and free markets. We’ll cover other topics too of course, but viewpoints opposing those pillars will be left to be published by others.”
Industry experts see both promise and peril in the Post’s approach. Gabriel Soto, senior director of research at Edison Research, noted that AI podcasts are “cost-effective,” eliminating the need for studios, writers, editors, and even human hosts. If successful, such technology could allow media brands to scale up their audio offerings dramatically. Andrew Deck, who covers AI and media for Harvard’s Nieman Lab, pointed out that the Post is hardly alone—BBC’s My Club Daily and Swiss broadcasters have also experimented with AI-generated audio. Yet Deck cautioned that “generative AI models hallucinate,” often making confident but completely incorrect statements.
There are also broader concerns about what’s lost when the human touch is replaced by algorithms. Nicholas Quah, a podcast writer for Vulture and New York magazine, observed that “there are people who do this for a living” who can “produce higher quality versions of these recordings.” The risk isn’t just job loss, but the erosion of the unique voice and community that make podcasts so compelling in the first place. As Deck put it, “This kind of news content is a far cry from the disembodied banter of AI podcast hosts.”
The potential for echo chambers is another worry. AI-driven personalization tends to serve up what audiences want to hear, rather than challenging them with diverse viewpoints or skepticism—a role that journalists have traditionally played. “AI-based news personalization tends to land firmly in the camp of delivering audiences what they want to know,” Deck said. That could deepen existing divides and erode trust in news organizations, especially if listeners discover that the “voice” they’re hearing isn’t real—or worse, is spreading misinformation.
For now, The Washington Post says it’s watching closely to see how users respond. “It’s early, and it’s an experimental product in a lot of ways,” Kattleman told Digiday. The team will be “looking at habit-based metrics rather than volume in the early going.” But as the newsroom’s reaction shows, the stakes are high. The experiment may be about technology, but at its core, it’s a test of trust—between a storied institution and the people who rely on it for the truth.