OpenAI’s latest foray into the world of artificial intelligence, Sora 2, has rapidly become a flashpoint in the ongoing debate over deepfakes, digital identity, and the very nature of online reality. Launched on September 30, 2025, Sora 2 is a text-to-video AI generation app that thrusts deepfake technology into the mainstream, blending the creative, the absurd, and the unsettling in equal measure. Its TikTok-like interface and user-friendly design have already unleashed a flood of AI-generated content across social media, raising both eyebrows and urgent ethical questions.
At the heart of Sora 2’s appeal—and controversy—is its “cameos” feature. According to OpenAI, users can now insert themselves into any Sora-generated scene with "remarkable fidelity" after submitting a one-time video and audio recording to verify their identity and capture their likeness. The company boasts that this tool lets anyone become the star of their own AI-powered video, but the reality has proven far more complicated and, at times, troubling.
Internet personalities were among the first to embrace the chaos. Influencer Jake Paul, never one to shy away from the limelight, allowed his likeness to be used by others and quickly became the face of viral deepfake videos. Clips of Paul coming out of the closet or giving makeup tutorials began circulating widely, riffing on a previous deepfake of him kissing his upcoming boxing opponent, Gervonta Davis. Paul’s initial response was tongue-in-cheek: "This AI is getting out of hand," he intoned gravely, only to lean into the joke by posting a campy TikTok video of his own on October 6, 2025.
But not everyone was amused. Paul’s girlfriend, Dutch speed skating champion Jutta Leerdam, appeared in a video to express her discomfort: "I don’t like it, it’s not funny!" she told him. "People believe it." Her concerns echo a growing unease about how quickly AI-generated content can blur the line between parody and deception, especially when the technology is so convincing that viewers struggle to distinguish fact from fabrication.
Indeed, the problematic undercurrents run deeper. As reported by Futurism, the Sora 2 phenomenon has exposed troubling possibilities for misuse. Not all faces appearing in Sora-generated videos have given their consent, and some users have found their likenesses used without permission. Journalist Taylor Lorenz, for example, revealed in a post that her stalker was using Sora to generate videos of her. "It is scary to think what AI is doing to feed my stalker’s delusions," she tweeted, highlighting the real-world risks of digital impersonation and the inadequacy of current safeguards.
The issue isn’t limited to living personalities. Families of deceased celebrities have voiced their own concerns about posthumous likeness rights. As OpenAI’s Sora 2 pushes the boundaries of AI video synthesis, the platform has sparked a heated debate about the ethics of reanimating the dead—sometimes for purposes far removed from the original person’s values or wishes. Zelda Williams, daughter of the late Robin Williams, was blunt in her criticism, writing, "Stop believing I wanna see it or that I’ll understand, I don’t and I won’t." She added, "AI is just badly recycling and regurgitating the past to be reconsumed. You are taking in the Human Centipede of content, and from the very, very end of the line, all while the folks at the front laugh and laugh, consume and consume."
OpenAI, for its part, has responded to the backlash by promising to refine its policies and implement more robust guardrails. The company moved to restrict the unauthorized use of copyrighted characters—such as those from SpongeBob SquarePants and South Park—after users flooded the app with clips featuring these familiar faces. While these measures were intended to address legal and ethical concerns, some users complained that the app had become "completely boring and useless" as a result.
Despite these challenges, Sora 2’s technical prowess is undeniable. According to industry analysis, the platform enables the effortless creation of hyper-realistic deepfake videos, a capability that has both thrilled and unsettled the public. Social media has become a battleground between those excited about the creative and educational uses of Sora 2 and those deeply apprehensive about its societal impacts. The technology’s ability to generate convincing fakes has experts warning about an erosion of trust in media and the potential for widespread misinformation.
OpenAI has tried to strike a balance, touting Sora 2’s positive applications in entertainment and education while acknowledging the need for greater responsibility. The company has committed to ongoing policy refinement, aiming to address both the backlash from families of deceased celebrities and broader societal concerns about misinformation and the unauthorized use of likenesses.
Industry experts have weighed in with predictions and analysis about the future of AI-driven video synthesis. Some see Sora 2 and similar platforms as harbingers of a new era in digital storytelling, where anyone can become the protagonist of their own narrative—real or imagined. Others warn that without proper oversight, these tools could undermine the very concept of truth online, making it increasingly difficult to trust what we see and hear.
For Jake Paul, the experience has been a mixed bag. On October 8, 2025, he posted a video expressing frustration with how AI-generated content was affecting his personal and professional life. "I’ve had it with the AI stuff," he said. "It’s affecting my relationships, businesses." Yet, even as he bemoaned the technology’s impact, Paul couldn’t resist the performative possibilities, applying foundation to his cheeks with a makeup brush as he delivered his message. "It’s really affecting things, and people really need to get a life," he added.
The Sora 2 saga has become a microcosm of the broader debate over AI’s place in society. On one hand, the platform offers unprecedented opportunities for creativity and self-expression. On the other, it raises urgent questions about consent, privacy, and the future of trust in digital media. As OpenAI continues to refine its policies and the public grapples with the implications, one thing is clear: the line between reality and artificiality is blurrier than ever, and the world is watching closely to see where we go from here.