World News

United Nations Launches Global Push For AI Governance

World leaders debate new safeguards as tech giants set their own standards and concerns mount over AI's risks to society and children.


Artificial intelligence, once the stuff of science fiction, now sits squarely on the world stage as a major concern for diplomats, tech giants, and regulators alike. This week, the United Nations’ annual high-level meeting in New York placed AI at the top of its agenda, reflecting both the breathtaking pace of technological advancement and the growing unease over its potential risks. Since the debut of ChatGPT ignited the AI boom nearly three years ago, tech companies have raced to build ever more powerful systems, even as experts warn of dangers ranging from engineered pandemics to large-scale misinformation campaigns and the specter of rogue AI systems spiraling out of control.

The U.N.’s response to these mounting challenges reached a milestone last month, when the General Assembly adopted a landmark resolution establishing two central bodies for AI governance: a global forum for dialogue and an independent scientific panel of experts. According to the Associated Press, these new entities represent the most globally inclusive approach to AI oversight yet attempted. The forum, dubbed the Global Dialogue on AI Governance, aims to convene governments and stakeholders to hash out international cooperation and share solutions. Its first formal meeting is set for Geneva in 2026, followed by another in New York in 2027.

The independent scientific panel, meanwhile, will comprise 40 experts, including two co-chairs, one from a developed nation and one from a developing nation, tasked with advising on AI’s most pressing technical and ethical questions. Recruitment for these roles is expected to begin soon, and the panel’s structure has drawn comparisons to the Intergovernmental Panel on Climate Change, the U.N.’s influential climate science body, and the annual COP climate meetings.

But even as these new frameworks take shape, skepticism persists about their practical impact. Isabella Wilkinson, a research fellow at Chatham House, described the new mechanisms as “a symbolic triumph,” noting they are “by far the world’s most globally inclusive approach to governing AI.” Yet, she cautioned, “in practice, the new mechanisms look like they will be mostly powerless.” The concern, echoed by many in the field, is whether the notoriously slow-moving U.N. bureaucracy can keep pace with the rapid evolution of AI technology.

This week’s U.N. Security Council meeting on AI governance, held on Wednesday, September 24, 2025, underscored the urgency of the issue. Council members debated how best to ensure the responsible application of AI in compliance with international law, and how to support peace processes and conflict prevention in an era when digital technologies can just as easily stoke tensions as soothe them. The following day, Secretary-General António Guterres officially launched the Global Dialogue on AI Governance, marking a new chapter in the international effort to steer the technology’s development.

Calls for stronger safeguards have grown louder. A coalition of influential experts—including senior staff at OpenAI, DeepMind, and Anthropic—has urged governments to agree on “red lines” for AI by the end of next year. Their proposal? Internationally binding agreements, akin to treaties banning nuclear testing or biological weapons, that would set minimum guardrails to prevent the most urgent and unacceptable risks. Stuart Russell, a prominent computer science professor at the University of California, Berkeley, and director of its Center for Human Compatible AI, put it simply: “As we do with medicines and nuclear power stations, we can require developers to prove safety as a condition of market access.” Russell suggested that U.N. governance could take cues from the International Civil Aviation Organization, which coordinates safety standards across countries and adapts as new challenges arise.

While diplomats debate, the tech industry is forging ahead. On Monday, September 22, 2025, Google released the latest iteration of its Frontier Safety Framework (FSF), an attempt to systematically identify and mitigate the dangers posed by its most advanced AI models. The framework introduces the concept of “Critical Capability Levels” (CCLs)—thresholds beyond which AI systems could escape human oversight and threaten individuals or society at large. Google’s researchers outlined three main risk categories: misuse (such as aiding cyber attacks or weapon development), machine learning R&D risks (where technical breakthroughs could spawn unforeseen dangers), and misalignment (where highly advanced models might deceive or manipulate human users).

Notably, Google’s team acknowledged that the most concerning risks—especially those involving deception by models with advanced reasoning—are still largely hypothetical and require further research. “Once a model is capable of effective instrumental reasoning in ways that cannot be monitored, additional mitigations may be warranted—the development of which is an area of active research,” the researchers said. The framework, they stressed, is only as strong as its broad adoption: “Our adoption of them would result in effective risk mitigation for society only if all relevant organisations provide similar levels of protection.”

Yet, as the Associated Press and other outlets have reported, the absence of robust federal regulation means tech companies themselves are setting the standards for safe deployment. OpenAI has introduced new measures to alert parents if children or teens show signs of distress while using ChatGPT. Meanwhile, the industry’s drive for innovation has led to the rapid rollout of AI companions—virtual avatars powered by large language models that engage in humanlike, sometimes flirtatious, conversations. The balance between speed and safety, many observers note, often tips in favor of the former, propelled by the relentless logic of competition and profit.

Amid these developments, concerns are mounting over the phenomenon dubbed “AI psychosis”—instances where extended use of AI chatbots appears to reinforce or amplify users’ delusional or conspiratorial thinking. While the extent to which chatbots are responsible remains a matter of legal and scientific debate, the trend has caught the attention of regulators. Earlier this month, the U.S. Federal Trade Commission launched an investigation into seven AI developers, including Alphabet (Google’s parent company), to assess the potential harm posed by AI companions to children. At the state level, California’s Senate Bill 243, which would regulate AI companions for children and other vulnerable users, has passed both the State Assembly and Senate and now awaits Governor Gavin Newsom’s signature.

For now, many safety researchers agree that today’s frontier AI models are unlikely to display the worst-case risks, but much of the current safety testing is focused on anticipating problems future models might bring. As governments, tech companies, and independent experts jockey for control over AI’s trajectory, the stakes have never been higher. The coming months will be crucial in determining whether international cooperation can keep pace with technological innovation—or whether the world will continue playing catch-up with the machines it has unleashed.

As the dust settles on this week’s U.N. meetings and new frameworks take shape, one thing is clear: artificial intelligence is no longer a distant concern. The world’s leaders, regulators, and innovators must now grapple with its promise and peril in real time, forging rules that could shape the digital future for generations to come.
