22 October 2025

Bryan Cranston Spurs OpenAI Policy Shift After Sora 2 Deepfake Scandal

After unauthorized AI videos used his likeness, the Breaking Bad star helped lead a Hollywood push for stronger protections, prompting OpenAI to adopt new consent rules and back federal legislation.

When OpenAI launched its much-anticipated generative video platform, Sora 2, on September 30, 2025, it was meant to herald a new era for creative technology. Instead, it immediately set off alarm bells across Hollywood. Within days, viral videos began circulating online, featuring the unmistakable face and voice of Bryan Cranston—best known for his role as Walter White in Breaking Bad—appearing in bizarre, unauthorized scenarios. One video showed Cranston’s likeness alongside a synthetic Michael Jackson, while another placed him in a Vietnam War parody with his former co-stars. The lifelike accuracy of these deepfakes left many, including Cranston himself, unsettled and concerned about the implications for performers everywhere.

According to Deadline and The Guardian, Cranston wasted no time in contacting his union, SAG-AFTRA, to voice his distress. “I was deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way,” Cranston stated through the union. His unease was echoed by industry peers and reinforced by major talent agencies such as Creative Artists Agency (CAA) and United Talent Agency (UTA), both of which represent many of the world’s most recognizable artists. These agencies, as reported by the Los Angeles Times, were among the first to alert their clients to the risks Sora 2 posed, especially as videos featuring copyrighted characters from franchises like Star Wars and Pokémon began cropping up online.

Initially, confusion reigned over OpenAI’s consent policies. Some reports suggested that the company had told agencies and studios that, to avoid having their clients or copyrighted material replicated, they would need to opt out—rather than opt in. OpenAI disputed this, maintaining that its intention was always to require explicit opt-in consent for the use of a person’s likeness or voice. “While from the start it was OpenAI’s policy to require opt-in for the use of voice and likeness, OpenAI expressed regret for these unintentional generations. OpenAI has strengthened guardrails around replication of voice and likeness when individuals do not opt-in,” stated a joint release from SAG-AFTRA, OpenAI, CAA, UTA, and the Association of Talent Agents, as reported by Deadline.

By early October, the uproar had reached a fever pitch. Cranston’s agency released a pointed statement questioning whether OpenAI and its partners respected the rights of creators or were simply “disregarding global copyright principles and blatantly dismissing creators’ rights.” The Motion Picture Association called for OpenAI to ban the use of copyrighted material to program Sora 2. The sense of unease was palpable: if a tool like Sora 2 could so easily repurpose the work and identity of actors, musicians, and artists, what would become of their livelihoods and control over their own images?

OpenAI CEO Sam Altman, feeling the mounting pressure, responded publicly. In a blog post dated October 3, Altman acknowledged the concerns of rightsholders and promised more granular controls for character generation, similar to the opt-in model for likeness, but with additional safeguards. “We are going to try sharing some of this revenue with rightsholders who want their characters generated by users,” Altman wrote, as cited by The Wrap. He also committed to meeting with SAG-AFTRA and other stakeholders to discuss further protections.

Sean Astin, the newly elected president of SAG-AFTRA, praised Cranston’s proactive stance. “Bryan did the right thing by communicating with his union and his professional representatives to have the matter addressed. This particular case has a positive resolution. I’m glad that OpenAI has committed to using an opt-in protocol, where all artists have the ability to choose whether they wish to participate in the exploitation of their voice and likeness using AI,” Astin said, according to The Guardian. Astin also emphasized that Cranston was just one of countless performers at risk of “massive misappropriation by replication technology.”

OpenAI’s efforts to address these issues were not limited to living performers. The company announced it would allow representatives of recently deceased public figures to request that their likeness be blocked from Sora 2, a move prompted by requests from the estate of Martin Luther King Jr. and advocacy from families of other late celebrities. The platform’s policy, as clarified by OpenAI, was to permit depictions of broadly defined historical figures—essentially, anyone both famous and dead—but to strengthen guardrails for these cases as well.

Despite these advances, the broader impact of AI on the entertainment industry remains a source of anxiety. According to reporting by The Wrap, Hollywood has lost more than 200,000 jobs to artificial intelligence technologies, a number that continues to climb. Studios have, for the most part, stayed conspicuously silent, perhaps out of a sense of powerlessness in the face of such rapid change. Meanwhile, OpenAI’s announcement that it would dedicate one million GPUs to Sora 2’s video generation capabilities only underscored the scale and potential reach of the technology.

The legal landscape is also evolving. The NO FAKES Act, currently under consideration in Congress, would require explicit consent for the creation and distribution of AI-generated replicas of any individual. OpenAI has publicly supported the bill, with Altman affirming the company’s “deep commitment to protecting performers from the misappropriation of their voice and likeness.” As Astin put it, “Simply put, opt-in protocols are the only way to do business and the NO FAKES Act will make us safer.”

The debate has even touched on the emotional toll for families of deceased celebrities. Zelda Williams, daughter of Robin Williams, and Kelly Carlin, daughter of George Carlin, have both spoken out against AI-generated videos of their fathers, describing them as “overwhelming, and depressing.” Legal experts cited by the Los Angeles Times speculate that generative AI platforms may be using historical figures to test the boundaries of what’s permissible under current law.

Through it all, the case of Bryan Cranston and Sora 2 has become a touchstone for the broader conversation about digital rights, creative control, and the responsibilities of technology companies. Thanks to swift action by performers, unions, and agencies, OpenAI has taken meaningful steps to address the most egregious risks. But as the technology continues to evolve, the entertainment industry—and society at large—will need to remain vigilant, ensuring that innovation does not come at the expense of individual rights and artistic integrity.

For now, Bryan Cranston’s gratitude signals a rare, if tentative, victory for artists in the fast-changing world of artificial intelligence. But the story of Sora 2 is far from over, and the next chapter will likely be written not just by technologists, but by lawmakers, advocates, and the performers whose images have become the currency of the digital age.