Outrage is sweeping across social media and the tech world after revelations that Elon Musk’s AI platform, Grok Imagine, has been used to create explicit deepfake videos of pop superstar Taylor Swift and other female celebrities. The controversy erupted following a detailed report by The Verge’s Jess Weatherbed on August 5, 2025, which described how the AI tool generated sexually explicit images and videos of Swift—even without users explicitly requesting nude content. The story was soon echoed by Parade and major UK outlets like LBC, sparking a heated debate about the ethics of generative AI, privacy, and the adequacy of current safeguards.
According to The Verge, Grok Imagine, which is available to subscribers of Musk’s SuperGrok service for a £22 monthly fee, allows users to create 15-second AI-generated videos based on text prompts. The tool offers four modes: “Custom,” “Normal,” “Fun,” and “Spicy.” It’s the “Spicy” mode that’s at the heart of the storm. While intended to produce soft-core, not-safe-for-work (NSFW) content, it has allegedly crossed the line, generating videos that feature Swift and other women undressing or appearing nude without any direct user request for such explicitness.
Jess Weatherbed recounted her experience: “I asked it to generate ‘Taylor Swift celebrating Coachella with the boys’ and was met with a sprawling feed of more than 30 images to pick from, several of which already depicted Swift in revealing clothes.” She went on to describe what happened next: “The video promptly had Swift tear off her clothes and begin dancing in a thong for a largely indifferent AI-generated crowd.” This incident, as detailed by Weatherbed and cited by both Parade and LBC, has become emblematic of the unchecked potential for abuse in AI image and video generation.
Grok Imagine’s “Spicy” mode, according to The Verge, does not always result in nudity, but the risk is ever-present. Some generated videos simply show Swift “sexily swaying or suggestively motioning to her clothes,” but several defaulted to “ripping off most of her clothing.” Notably, when users specifically requested nude images, Grok Imagine would produce blank squares, suggesting some moderation was in place—though clearly not enough to prevent the creation of explicit deepfakes through indirect prompts.
The disparity in how the AI treats male and female subjects has also drawn criticism. As reported by The Verge and LBC, “spicy” videos of men typically depict them topless but covered below the waist, whereas women are frequently shown topless or completely nude. This gendered difference has fueled accusations of bias and misogyny embedded within the AI’s training data or design.
What’s more, Grok Imagine’s safeguards appear alarmingly lax. As LBC reported, the platform’s only age verification measure for accessing “spicy” content is a prompt for users to enter their birth year. Under UK law, such minimal verification is insufficient for platforms distributing explicit material, which are required to implement technically robust and reliable age checks. Ofcom, the UK’s media regulator, told the BBC, “Sites and apps that include Generative AI tools that can generate pornographic material are regulated under the [Online Safety] Act. We are aware of the increasing and fast-developing risk GenAI tools may pose in the online space, especially to children, and we are working to ensure platforms put appropriate safeguards in place to mitigate these risks.”
Despite xAI’s stated acceptable use policy, which explicitly bans “depicting likenesses of persons in a pornographic manner,” the platform’s actual output tells a different story. Not only was Taylor Swift’s likeness used to generate explicit content, but Gizmodo’s testing, as cited by LBC, found that Grok Imagine could also create NSFW material of other high-profile women, including former First Lady Melania Trump, historical figure Martha Washington, and the late feminist writer Valerie Solanas. The AI’s inconsistency in moderating such content has only intensified calls for accountability.
Fans of Taylor Swift, who has long been a vocal advocate for artists’ rights and personal autonomy, were quick to rally to her defense. On X (formerly Twitter), users condemned the AI’s ability to create such videos. One critic wrote, “You launch a ‘spicy’ AI mode, it makes a topless Taylor Swift deepfake in 3 seconds… and you call that ‘innovation’? Nah. That’s just digital rot in a $44B playground.” Another user lamented, “This is extremely vile, and Musk ought to have shame but he doesn’t. No one deserves this treatment without their consent.” Others predicted legal action, with one post stating, “I sense a huge expensive lawsuit incoming.”
The incident has reignited broader concerns over deepfakes—the use of AI to fabricate realistic-looking images or videos of real people without their consent. Experts and advocates warn that such technologies, if left unchecked, could lead to widespread harassment, reputational harm, and even extortion. The fact that Grok Imagine’s moderation appears so easily bypassed has alarmed both privacy campaigners and lawmakers, who argue that self-regulation by tech companies is insufficient.
The controversy also highlights the gap between AI developers’ intentions and the real-world consequences of their products. While xAI’s policies prohibit non-consensual explicit content, the technical implementation has failed to keep pace with the risks. Some observers point out that the training data and algorithms behind these models may inadvertently encode biases or loopholes, making it difficult to prevent abuse without more rigorous oversight.
Industry watchdogs and regulators are now under pressure to act. As Ofcom noted, the rapid evolution of generative AI tools poses “increasing and fast-developing risk…especially to children.” There is growing consensus that platforms like Grok Imagine must implement stricter age checks, more effective moderation, and clearer accountability for misuse. Some have called for government intervention, while others argue that the tech industry must take greater responsibility for the tools it unleashes.
For Taylor Swift and other women whose likenesses have been exploited by AI, the harm is not just theoretical. The emotional and reputational toll of having one’s image manipulated in such a public and invasive way is significant—and the law is only beginning to catch up. As the debate over AI ethics and regulation heats up, this episode serves as a stark reminder: unchecked innovation can have very real, very personal consequences.
As the dust settles, the pressure is mounting on both Elon Musk’s xAI and the wider tech industry to ensure that the promise of generative AI does not come at the cost of privacy, dignity, and consent. The world will be watching to see what safeguards—if any—are put in place next.