In a striking show of unity, more than 800 public figures—including Apple co-founder Steve Wozniak, Prince Harry, and Meghan Markle—have joined a growing global movement demanding a ban on the development of artificial intelligence (AI) "superintelligence" until comprehensive safety measures and broad public support are in place. The call, formalized in a statement released by the nonprofit Future of Life Institute (FLI) on October 22, 2025, has since gathered momentum, with more than 28,000 individuals adding their names to an online petition, as reported by multiple outlets including The Financial Times and Nexstar Media.
The core message of the letter is both simple and urgent: "We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in." Brief as it is, this 30-word statement encapsulates a broad spectrum of anxieties about AI's rapid evolution and its potentially existential risks.
Superintelligence, as defined by Harry and Meghan’s Archewell Foundation, refers to "artificial intelligence capable of outperforming all humans at most cognitive tasks." According to the statement, leading AI experts now believe such systems could be less than a decade away—a timeline that has prompted alarm among technologists and public advocates alike. The signatories warn that, while AI innovation holds great promise for progress, a heedless rush toward superintelligence without robust safeguards could have "unthinkable consequences for humanity."
Prince Harry, who has previously championed online safety and stronger content moderation policies for children, underscored the movement’s ethos. "The future of AI should serve humanity, not replace it. I believe the true test of progress will be not how fast we move, but how wisely we steer. There is no second chance," he stated, as cited by Nexstar Media and echoed in the Archewell Foundation’s communications.
The letter's signatories form a remarkably diverse coalition. Alongside Harry and Meghan are AI pioneers such as Geoffrey Hinton and Yoshua Bengio—co-winners of the Turing Award, computer science's top honor—who have become outspoken critics of the very technologies they helped create. Hinton, who also won a Nobel Prize in physics last year, has repeatedly sounded the alarm about the dangers of unchecked AI development. The list also includes former Obama White House adviser Susan Rice, former Chairman of the Joint Chiefs of Staff Mike Mullen, former White House strategist Steve Bannon, actor Joseph Gordon-Levitt, and British billionaire Richard Branson.
Stuart Russell, an influential AI researcher and professor at the University of California, Berkeley, sought to clarify the statement’s intent. "This is not a ban or even a moratorium in the usual sense. It's simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?" he wrote, as reported by Nexstar Media.
The letter’s release was strategically aimed at tech giants like Google, Meta, and OpenAI—companies locked in a high-stakes race to develop AI systems that could surpass human capabilities. The signatories argue that this competition is fueling a "race to the bottom," as described by Max Tegmark, president of the Future of Life Institute and a professor at MIT. "I think that's why it's so important to stigmatize the race to superintelligence, to the point where the U.S. government just steps in," Tegmark told Nexstar Media. He noted that the criticism of superintelligence has gone "very mainstream," a shift from previous years when such concerns were largely debated within the AI research community.
The letter’s preamble acknowledges that AI tools can bring health and prosperity, but it warns that the stated ambitions of many leading AI companies—to build superintelligence within the next decade—have triggered worries about economic obsolescence, loss of freedom, civil liberties, national security risks, and even "potential human extinction." These are not idle fears, say the organizers, but real risks that demand immediate and collective action.
Actor Joseph Gordon-Levitt, another signatory, voiced a sentiment shared by many: "Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc. But does AI also need to imitate humans, groom our kids, turn us all into slop junkies and make zillions of dollars serving ads? Most people don't want that." His comments, reported by Nexstar Media, highlight the complex trade-offs at stake as AI technologies become ever more entwined with daily life.
The petition's reach extends well beyond celebrities and technologists. According to The Daily Report, more than 28,000 individuals—including hundreds of public figures and several prominent AI pioneers—have signed the petition since its launch. Anthony Aguirre, one of the campaign's organizers, pointed to its momentum and the unusually broad base of support it has attracted.
This is not the first time the Future of Life Institute has tried to slow the breakneck pace of AI development. In March 2023, the group issued a letter urging tech companies to pause work on more powerful AI models, a call that was largely ignored by industry leaders. Notably, Elon Musk, who signed the 2023 letter, was at the same time starting his own AI venture to compete with the very firms he wanted to pause. When asked if Musk had been approached again, Tegmark replied that he had written to all major U.S. AI developers but did not expect them to join this time around.
The debate over superintelligence is further complicated by the tendency of some companies to hype their AI products’ capabilities, sometimes overstating their progress to attract investment or market share. For example, OpenAI recently faced skepticism from mathematicians and scientists after claims that ChatGPT had solved unsolved math problems—when, in fact, it had merely summarized existing online material. "There's a ton of stuff that's overhyped and you need to be careful as an investor, but that doesn't change the fact that—zooming out—AI has gone much faster in the last four years than most people predicted," Tegmark commented.
Despite the growing chorus of concern, the tech giants targeted by the letter—Google, Meta, OpenAI, and Musk’s xAI—have not publicly responded to the latest call for a ban. The silence underscores the high stakes and deep divisions within the tech industry and policy circles over how best to manage AI’s rapid ascent.
As the petition’s signatures climb and the debate intensifies, one thing is clear: the future of AI—and the question of whether humanity can steer it safely—has become a defining challenge of our time.