In a development that has sent shockwaves through the creator community, YouTube has confirmed it has been quietly using artificial intelligence (AI) to alter videos uploaded to its Shorts platform—without informing or seeking consent from the creators themselves. The revelation, made public on August 25, 2025, has sparked heated debate about transparency, authenticity, and the future of digital content, as reported by BBC, The Atlantic, and other major outlets.
The controversy first bubbled up when popular music YouTubers Rick Beato and Rhett Shull noticed subtle but unsettling changes in their recent uploads. "I was like, 'Man, my hair looks strange,'" Beato told the BBC. "And the closer I looked it almost seemed like I was wearing makeup." For Shull, the difference was even more jarring. He described the processed videos as looking "smoothened" and having an "oil painting effect" on his face—effects he had not applied himself. Shull’s video exposing the issue quickly went viral, amassing over 600,000 views and igniting a wider conversation among creators and fans alike.
What exactly was happening to these videos? According to YouTube, the company was running "an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video)," as stated by Rene Ritchie, YouTube’s head of editorial and creator liaison, in a post on X. The changes, though often subtle—sharper wrinkles in shirts, smoother or more textured skin, and occasionally warped ears—were enough to give the content an artificial, AI-generated sheen. For creators whose livelihoods depend on authenticity and trust, this was no small matter.
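YouTube has not published details of its processing pipeline beyond Ritchie's summary, but the techniques he names—deblurring, denoising, clarity enhancement—are classic non-generative image operations. As a rough, hypothetical illustration of why such processing can produce the "over-sharpened" look creators complained about, here is a minimal unsharp-mask sharpener in plain NumPy. The function names and parameters are illustrative assumptions, not YouTube's actual method:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian blur kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Naive same-size convolution with edge padding (for clarity, not speed)."""
    pad = kernel.shape[0] // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            region = padded[i:i + kernel.shape[0], j:j + kernel.shape[1]]
            out[i, j] = (region * kernel).sum()
    return out

def unsharp_mask(img, amount=1.5, sigma=1.0):
    """Classic 'clarity' enhancement: boost the difference between the
    image and a blurred copy. Large `amount` values exaggerate edges,
    which is what makes skin and fabric look artificially crisp."""
    blurred = convolve2d(img, gaussian_kernel(5, sigma))
    sharpened = img + amount * (img - blurred)
    return np.clip(sharpened, 0, 255)
```

Applied to a grayscale frame, this overshoots pixel values on either side of every edge—visible as halos and exaggerated texture. Real smartphone and platform pipelines use learned, far more sophisticated filters, but the family of operation is the same: pixels are re-weighted, nothing new is synthesized.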
"If I wanted this terrible over-sharpening I would have done it myself. But the bigger thing is it looks AI-generated. I think that deeply misrepresents me and what I do and my voice on the internet. It could potentially erode the trust I have with my audience in a small way. It just bothers me," Shull lamented, as reported by BBC and The Atlantic. Other creators echoed similar concerns, with many taking to social media as early as June to post closeups of altered body parts and question YouTube’s intentions.
Transparency—or the lack thereof—has become the central issue. Unlike AI features on smartphones, which users can typically opt into or disable, YouTube’s enhancements were applied automatically, with no notification or choice given to creators. As Samuel Woolley, the Dietrich Chair of Disinformation Studies at the University of Pittsburgh, told the BBC, "You can make decisions about what you want your phone to do, and whether to turn on certain features. What we have here is a company manipulating content from leading users that is then being distributed to a public audience without the consent of the people who produce the videos."
YouTube’s attempt to draw a line between "traditional machine learning" and generative AI has done little to quell the criticism. Ritchie emphasized that the experiment did not use generative AI, which creates entirely new content, but rather machine learning to enhance clarity. However, experts like Woolley argue that this distinction is largely academic in this context. The end result is the same: content is being modified by algorithms in ways that may not be immediately obvious to viewers or even to the creators themselves.
The implications go far beyond a few quirky videos. As Jill Walker Rettberg, a professor at the Centre for Digital Narrative at the University of Bergen, put it, "Footsteps in the sand are a great analogy. You know someone made those footprints. With an analogue camera, you know something was in front of the camera because the film was exposed to light. But with algorithms and AI, what does this do to our relationship with reality?"
This isn’t the first time technology has blurred the line between reality and representation. Decades ago, the arrival of Photoshop sparked concerns about manipulated images, while more recently, beauty filters and airbrushing on social media have raised questions about authenticity. But as Woolley noted, "AI puts these trends on steroids." The scale and subtlety of AI-driven changes make them harder to detect and, potentially, more damaging to trust.
Indeed, trust is already in short supply. According to research cited by The Atlantic, public confidence in mass media has plummeted from 72% in the 1970s to just 34% in 2023. The knowledge that platforms might be altering content behind the scenes—especially without disclosure—risks undermining what little trust remains between creators, platforms, and audiences. "If the audience thinks we’re editing ourselves or altering how we look without telling them, that erodes trust," one creator emphasized in reporting by The Atlantic.
Some observers warn that YouTube’s experiment could set a dangerous precedent, shifting power away from creators and toward platforms. By controlling how videos are processed and displayed, YouTube and similar companies gain unprecedented influence over how creators are represented and perceived. This is not a theoretical concern: Google’s own Video Intelligence API and other technologies demonstrate the capacity for large-scale automated video analysis and modification. If rolled out more broadly, such tools could fundamentally alter the dynamics of creative ownership and audience perception.
Meanwhile, YouTube is not alone in its embrace of AI-driven content modification. Other platforms are conducting similar experiments. Meta is developing AI chatbots for Facebook and Instagram, TikTok has launched a section for creating videos using AI, and Snapchat offers tools for generating images based on selfies. Even Google’s Pixel 10 smartphone now features generative AI in its camera, allowing users to zoom up to 100x and select the best facial expressions from a series of photos—essentially creating a moment that never actually happened. To address some of these concerns, Google is implementing digital watermarks, known as content credentials, to indicate when images have been edited using AI.
Despite the backlash, not all creators are up in arms. "You know, YouTube is constantly working on new tools and experimenting with stuff," Beato told the BBC. "They’re a best-in-class company, I’ve got nothing but good things to say. YouTube changed my life." Still, for many, the issue is not about resisting change, but about maintaining agency and transparency in a rapidly evolving digital landscape.
As platforms like YouTube continue to experiment with AI, the debate over who controls digital content—and how much audiences can trust what they see—shows no signs of fading. The outcome will shape not just the future of online video, but the very nature of our shared digital reality.