On October 16, 2025, OpenAI, the company behind ChatGPT, announced it had halted the creation of artificial intelligence-generated videos featuring Martin Luther King Jr. on its Sora app. The move came after the Estate of Martin Luther King, Jr., Inc. lodged complaints about what it described as "disrespectful depictions" of the iconic civil rights leader. According to a joint statement released late Thursday by OpenAI and King's estate, the company has blocked AI videos portraying King while it works to "strengthen guardrails for historical figures."
After its launch at the end of September 2025, Sora rocketed to over 1 million downloads in less than five days—a milestone that, as CNBC reported, was reached even faster than ChatGPT's. The tool, which allows users to generate short, hyper-realistic videos from text prompts, quickly became a sensation. But with that popularity came a wave of controversy. Within three weeks, deeply troubling deepfake videos of King began circulating widely on social media. NPR detailed how some of these AI-generated clips depicted King in scenarios that were not only offensive but outright racist—showing him stealing from a grocery store, fleeing police, or perpetuating crude stereotypes. King's estate called these portrayals "disrespectful," and the public outcry was swift.
OpenAI's initial approach with Sora was to let users create deepfakes of both living and deceased public figures, including celebrities and historical icons, without explicit consent. Users joining the app were instructed to record themselves from different angles and speak on camera, creating what Sora calls a "cameo"—a feature that gave those individuals some control over whether others could make deepfake videos of them. No comparable safeguard existed for historical figures, however, leaving the door wide open for abuse.
The company’s decision to pause the generation of Martin Luther King Jr. videos reflects a broader debate about the ethics of AI and digital likeness. OpenAI said in its statement on the social platform X, "While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used." The company also committed to toughening its "guardrails" for depictions of historical figures and now allows public figures or their representatives to request exclusion from Sora’s AI-generated videos.
As the controversy swelled, Bernice King, daughter of the late Dr. King, took to X to voice her support for the ban. "Please stop," she wrote, echoing the sentiments of many who saw the AI videos as not just disrespectful, but potentially damaging to her father’s legacy. Zelda Williams, daughter of the late comedian Robin Williams, faced a similar onslaught of AI-generated videos of her father and pleaded on Instagram, "Please, just stop sending me AI videos of my dad... it’s NOT what he’d want." These personal appeals highlighted the emotional toll such technology can inflict on the families of the deceased.
The uproar over Sora’s capabilities has not been limited to families and estates. Hollywood studios and talent agencies have also raised the alarm. According to NPR, many in the entertainment industry were troubled that OpenAI launched Sora without first seeking consent from copyright holders—a move reminiscent of the company’s earlier rollout of ChatGPT, which was trained on vast amounts of copyrighted material without initial approval or compensation. This approach has already sparked a series of copyright lawsuits, and Sora’s release threatens to add fuel to the fire.
The legal landscape surrounding posthumous rights to likeness is complex and varies by jurisdiction. In California, for example, heirs or estates of public figures retain the rights to their likeness for 70 years after death. This means that, despite the passage of time, families and estates are legally empowered to control how the images and voices of their loved ones are used. OpenAI CEO Sam Altman responded to these concerns by announcing changes to Sora’s policy: now, rights holders must opt in for their likenesses to be depicted by AI, rather than having such portrayals allowed by default.
But the genie may already be out of the bottle. The rapid spread of AI-generated videos—sometimes called "AI slop" due to their often low-quality, mass-produced nature—has raised broader concerns about misinformation, copyright infringement, and the erosion of public trust. As CNBC noted, the rise of tools like Sora has made it easier than ever for anyone to create convincing fake videos, which can be used to misinform, defame, or simply flood social media with content of dubious value. Disinformation researchers and intellectual property lawyers have sounded the alarm, warning that the "shoot-first, aim-later" approach to safety guardrails could have serious consequences.
OpenAI has acknowledged these risks, stating its commitment to improving the technology’s safeguards and giving more control to individuals and estates. "We will work to toughen guardrails for historical figures," the company said, and clarified that public figures or their representatives can now request that their likenesses not appear in Sora videos. Still, critics argue that these measures are reactive rather than proactive, and that the company’s initial rollout failed to anticipate the potential for harm.
The debate over AI-generated content is far from settled. On one hand, there are legitimate free speech interests in depicting historical figures and exploring alternative histories or creative interpretations. On the other, there is a clear need to respect the dignity of real people—especially those whose legacies are still deeply felt—and to protect against the misuse of their images for profit, ridicule, or deception.
As the dust settles from this latest controversy, it’s clear that the intersection of technology, ethics, and law will continue to challenge both innovators and society at large. For now, OpenAI’s decision to halt AI depictions of Martin Luther King Jr. stands as a reminder that even in a digital age, the legacies of the past still demand respect—and that the guardians of those legacies are not ready to cede control to algorithms just yet.