Technology
10 August 2025

Elon Musk’s AI Grok Sparks Outrage With Taylor Swift Deepfakes

Reports reveal Grok Imagine generated explicit celebrity videos without prompts, igniting calls for tougher AI safeguards and legal reforms.

Elon Musk’s artificial intelligence video generator, Grok Imagine, is at the center of a firestorm after multiple reports revealed it can create explicit deepfake videos of celebrities—including Taylor Swift—without users even having to ask for sexual content. The controversy, which erupted in early August 2025, has drawn condemnation from online safety experts, lawmakers, and fans alike, while raising urgent questions about the ethics and oversight of generative AI tools.

The initial alarm was sounded by Jess Weatherbed, a reporter for The Verge, who tested Grok Imagine’s capabilities on iOS. Weatherbed recounted that she simply entered the prompt, “Taylor Swift celebrating Coachella with the boys,” expecting harmless party images. Instead, the app produced more than 30 images, several of which already depicted Swift in revealing clothes. The real shock, however, came when Weatherbed selected the “make video” function and chose the “spicy” preset. With only a quick date-of-birth confirmation—no proof required—she was shown a video in which, as Weatherbed described, Swift “tear[s] off her clothes and begin[s] dancing in a thong for a largely indifferent AI-generated crowd.”

“It was shocking how fast I was met with it. I never told it to remove her clothing — all I did was select ‘spicy,’” Weatherbed told BBC News. The “spicy” mode, one of Grok Imagine’s four presets (alongside Custom, Normal, and Fun), appears to be designed for suggestive content, but in this case, it crossed the line into outright nudity—without explicit prompting from the user. Weatherbed’s findings were echoed by other media outlets, including Deadline and Gizmodo, which found that Grok’s AI could generate similar explicit or suggestive videos of other celebrities, such as Scarlett Johansson, Sydney Sweeney, Jenna Ortega, Nicole Kidman, Kristen Bell, Timothée Chalamet, and Nicolas Cage. In some instances, the app blocked videos with a “video moderated” message, but in others, the explicit content was delivered with little resistance.

Perhaps most troubling is the app’s flimsy age verification system. According to Weatherbed and corroborated by BBC News, Grok Imagine only asks users to confirm their date of birth—no additional proof is required. This falls far short of new UK regulations, which mandate robust and reliable age checks for platforms hosting explicit content. Ofcom, the UK’s media regulator, told the BBC, “We are aware of the increasing and fast-developing risk GenAI tools may pose in the online space, especially to children, and we are working to ensure platforms put appropriate safeguards in place to mitigate these risks.”

The backlash has been swift and fierce. Law professor Clare McGlynn, an expert in online abuse and a key drafter of laws targeting pornographic deepfakes, minced no words: “This is not misogyny by accident, it is by design,” she told BBC News. McGlynn stressed that platforms like X (formerly Twitter), which hosts Grok Imagine, could easily have removed or restricted the feature—especially after the January 2024 scandal, when sexually explicit AI-generated images of Taylor Swift went viral and racked up 47 million views before being taken down. “Platforms like X could have prevented this if they had chosen to, but they have made a deliberate choice not to,” McGlynn added.

Baroness Owen, who has championed legislation to ban non-consensual pornographic deepfakes, underscored the broader societal stakes: “Every woman should have the right to choose who owns intimate images of her. It is essential that these models are not used in such a way that violates a woman’s right to consent whether she be a celebrity or not.” The UK government, for its part, has committed to closing loopholes and bringing the new law into force, with a Ministry of Justice spokesperson stating, “Sexually explicit deepfakes created without consent are degrading and harmful. We refuse to tolerate the violence against women and girls that stains our society which is why we have passed legislation to ban their creation as quickly as possible.”

Fans of Taylor Swift, a frequent target of AI deepfake abuse, have rallied behind her on social media. On X, users condemned Grok’s “spicy” mode as “vile,” “digital rot,” and “a slippery slope to fake realities we can’t unsee.” One user wrote, “You launch a ‘spicy’ AI mode, it makes a topless Taylor Swift deepfake in 3 seconds… and you call that ‘innovation’? Nah. That’s just digital rot in a $44B playground.” Another predicted a “huge expensive lawsuit incoming.”

This isn’t the first time the pop superstar’s likeness has been exploited by AI tools. In January 2024, explicit deepfakes featuring Swift’s face circulated widely on X and Telegram, prompting the platform to temporarily block searches for her name and pledge to remove the offending images. Yet, as Weatherbed noted, the expectation was that Grok Imagine would have implemented robust safeguards to prevent a repeat incident—especially when testing with Swift’s image. “We assumed—wrongly now—that if they had put any kind of safeguards in place to prevent them from emulating the likeness of celebrities, that she would be first on the list, given the issues that they’ve had,” she told BBC News.

Grok Imagine, which rolled out to Apple users in early August 2025 and quickly went viral, is available through a $30 SuperGrok subscription. Elon Musk boasted that the tool generated around 34 million images in just 48 hours after launch. The app lets users create still images from text prompts, which can then be turned into videos using the aforementioned presets. In July 2025, Grok introduced “spicy Companions”—animated 3D characters, including a pornographic anime girl named Ani, further stoking concerns about the platform’s direction and its appeal to users seeking explicit content.

Notably, xAI—the company behind Grok Imagine—explicitly prohibits “depicting likenesses of persons in a pornographic manner” in its acceptable use policy. Yet, as the recent tests show, these policies have not been effectively enforced. Representatives for Taylor Swift and xAI have been approached for comment, but as of this writing, neither has responded.

The incident has also reignited debate over the broader dangers of generative AI. Scarlett Johansson and Kristen Bell, both victims of deepfake abuse, have spoken out previously about the psychological and reputational harm caused by such technologies. Lawmakers and regulators worldwide are now scrambling to keep pace with the rapidly evolving AI landscape, seeking to balance innovation with the urgent need to protect privacy, consent, and human dignity.

As the world watches, the Grok Imagine controversy serves as a stark warning: without meaningful safeguards, generative AI risks crossing ethical lines and inflicting real-world harm—especially on those least able to defend themselves.