Technology
18 October 2025

OpenAI Halts Sora App Videos Of Martin Luther King Jr.

After families protest offensive deepfakes, OpenAI blocks AI-generated videos of the civil rights leader and pledges tighter controls for historical figures.

The digital age has ushered in a new era of creativity and controversy, as artificial intelligence blurs the lines between reality and fabrication. That tension burst into public view on October 16, 2025, when OpenAI—the powerhouse behind ChatGPT—announced it was blocking users from generating AI videos of the late civil rights icon Martin Luther King Jr. via its Sora app. The move, made in direct response to a request from King’s estate, has sparked an urgent debate over free speech, digital ethics, and the rights of public figures—living and deceased—in the era of deepfakes.

OpenAI’s decision didn’t come out of the blue. In recent weeks, the company had faced mounting criticism after users exploited its newly launched Sora 2 platform to create hyper-realistic, and sometimes deeply offensive, videos featuring not only King but a host of other deceased celebrities. According to The Washington Post and NPR, these AI-generated clips ranged from lighthearted tributes to deeply disrespectful fabrications—some showing King making monkey noises during his iconic “I Have a Dream” speech, others depicting him stealing from a grocery store or fleeing police. It was a digital free-for-all, and the backlash was swift.

Bernice King, the youngest child of Martin Luther King Jr. and CEO of the King Center, was among the first to speak out. After Zelda Williams, daughter of the late actor Robin Williams, posted a heartfelt plea on Instagram—“Please, just stop sending me AI videos of Dad. It’s NOT what he’d want”—Bernice King echoed the sentiment publicly: “I concur concerning my father. Please stop.” These appeals from the families of public figures underscored the emotional toll of seeing loved ones’ legacies manipulated by strangers for entertainment or, worse, to spread misinformation.

OpenAI responded by pausing the ability to create AI videos of King in Sora, saying in a joint statement with King Estate Inc., “While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used.” The company further clarified that authorized representatives or estate owners of other historical figures can now request that their likeness not be used in Sora videos. This marks a significant shift in policy: previously, such depictions were allowed by default unless a complaint was lodged.

The controversy highlights the broader ethical minefield that AI-generated content presents. Sora, which launched at the end of September 2025 and quickly soared to the top of Apple’s App Store charts, allows users to create AI-driven videos from text prompts. The app’s “cameo” feature lets people upload videos of themselves to generate digital doubles, but it also enabled the creation of deepfakes featuring celebrities and historical figures—often without consent. As NPR reported, users gleefully produced clips of Amy Winehouse hosting a cooking show, Michael Jackson working at Walmart, and even fictional conversations between James Gandolfini and Robin Williams.

It didn’t take long for Sora to become a lightning rod for criticism. “The AI industry seems to move really quickly, and first-to-market appears to be the currency of the day (certainly over a contemplative, ethics-minded approach),” Kristelia García, an intellectual property law professor at Georgetown Law, told NPR. She noted that right-to-publicity and defamation laws vary by state and may not always apply to deepfakes, meaning there’s often “little legal downside to just letting things ride unless and until someone complains.” In California, for instance, heirs to a public figure own the rights to their likeness for 70 years after death, but enforcement remains murky in digital spaces.

OpenAI CEO Sam Altman has acknowledged the risks, expressing “trepidation” about launching Sora and its social media features. In a post on X, Altman wrote, “Social media has had some good effects on the world, but it’s also had some bad ones. We are aware of how addictive a service like this could become, and we can imagine many ways it could be used for bullying.” He later announced that OpenAI would move from an opt-out to an opt-in model for copyright holders, giving rightsholders “more granular control over generation of characters.” This policy change aims to address not just the misuse of deceased celebrities’ images, but also the unauthorized use of copyrighted fictional characters—another source of controversy for Sora, as videos featuring SpongeBob SquarePants and Mario quickly made the rounds online.

The King Center, established in 1968 by Coretta Scott King and now a global resource for nonviolent social change, has welcomed OpenAI’s willingness to engage in dialogue. Nearly a million people visit the Atlanta-based center each year to learn about Dr. King’s legacy. In its statement, OpenAI thanked Bernice A. King and John Hope Bryant, as well as the AI Ethics Council, “for creating space for conversations like this.” The company’s new stance gives estates and families a measure of control that had previously been lacking in the AI space.

Yet, the debate is far from settled. Hollywood studios and talent agencies have also voiced concerns, with the Motion Picture Association calling on OpenAI to take “immediate action” to fix its copyright opt-out system and police copyright infringement more proactively. OpenAI’s approach—often described as “asking forgiveness, not permission”—has already prompted a wave of lawsuits over the use of copyrighted material in AI training and outputs. Legal experts warn that as AI-generated content becomes more realistic and accessible, the potential for disinformation, defamation, and emotional harm grows exponentially.

More broadly, Sora’s rapid rise signals a shift in OpenAI’s business strategy. After years of focusing on research applications, the company is now building a social network powered by generative AI. The hope is to attract a massive user base and explore new revenue streams, as OpenAI remains unprofitable despite its technological breakthroughs. But with great power comes great responsibility, and the company’s handling of the King controversy will likely shape public trust in AI platforms for years to come.

For now, OpenAI’s move to block AI-generated videos of Martin Luther King Jr. stands as a pivotal moment in the evolving relationship between technology, creativity, and respect for historical legacy. As the digital frontier continues to expand, the world will be watching how tech giants, lawmakers, and the families of public figures navigate the ever-shifting boundaries of identity, memory, and meaning in the age of artificial intelligence.