19 November 2025

UK Policy Shift and US-China Talks Reshape AI Safety

A sharp UK and US pivot away from broad AI safety leaves a leadership void just as US and Chinese experts meet in Hong Kong to seek an unprecedented consensus on restricting military AI.

As the world stands at the crossroads of unprecedented technological change, the international community is grappling with how best to govern the rapid rise of artificial intelligence (AI)—especially as its military and societal impacts become ever more profound. In the past two years, the United Kingdom has played a pivotal role in shaping the global conversation on AI safety, while the United States and China, the world’s two technological superpowers, edge closer to a landmark consensus on restricting AI’s use in defense. Yet, just as momentum for robust, comprehensive AI governance appeared to be building, a sharp policy shift by the UK and the US has introduced fresh uncertainty into the future of international AI safety standards.

Back in 2023, the UK took center stage by hosting the first-ever AI Safety Summit at Bletchley Park. According to Opinio Juris, this event produced the Bletchley Declaration, a political commitment signed by 29 states, including China, dedicated to international collaboration on AI safety risks. The summit series continued with follow-up gatherings in Seoul and Paris, with participating nations and stakeholders lauding the flexible, informal nature of these talks as a way to galvanize progress. Nor was the summit the UK’s first contribution: it had already become a founding member of the Global Partnership on AI (GPAI) in 2020, an initiative designed to bridge the gap between AI theory and practice while upholding human rights and democratic values. The GPAI’s close relationship with the Organisation for Economic Co-operation and Development (OECD) soon evolved into a project-based partnership, further cementing the UK’s influence in the space.

Meanwhile, the G7 launched the Hiroshima AI Process—a policy framework for advanced AI systems—aligned with both the GPAI and the ongoing AI safety summits. The OECD, as reported by Opinio Juris, became the lynchpin for translating high-level principles into actionable best practices and national regulations. At the 2024 Seoul AI Safety Summit, sixteen AI companies publicly committed to identifying, assessing, and managing AI risks, submitting inaugural reports as part of a new international code of conduct overseen by the OECD.

The UK also played a crucial role in the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. Leveraging the expertise of the Turing Institute, the UK helped develop the HUDERIA methodology for AI impact assessments, which the Council of Europe adopted and is now piloting across member states.

Perhaps the UK’s most ambitious contribution was championing the 2025 International Scientific AI Safety Report. This authoritative study, written by 100 AI experts from 30 countries and chaired by Yoshua Bengio, established the first comprehensive, evidence-based consensus on AI safety risks, dividing them into three categories: malicious use (such as fake content and manipulation), malfunctions (like bias and loss of control), and systemic risks (including labor market disruption and privacy infringements).

According to Opinio Juris, the UK also pioneered a new model for institutional AI oversight: the AI Safety Institutes (AISIs). The UK’s AISI, established after the 2023 summit, became the world’s first state-backed body dedicated to evaluating advanced AI model safety. Over the next two years, seven more AISIs were set up in countries including the US, Japan, Singapore, South Korea, Canada, France, and India. These institutes serve as independent scientific evaluators, conducting rigorous testing of AI models—from chemical and biological risks to cyber vulnerabilities and societal impacts. The UK AISI, the best-resourced of the group, quickly established partnerships with leading AI labs and now leads the International Network of AI Safety Institutes, created in 2024 to accelerate global research and joint evaluations.

Yet, in a dramatic turn in 2025, the UK and US abruptly shifted their AI policy focus. Both countries rebranded their AI Safety Institutes to emphasize security, becoming the UK AI Security Institute and the US Center for AI Standards and Innovation (CAISI), and narrowed their mandates to protecting AI systems from external threats rather than addressing the full spectrum of societal and ethical risks. As Opinio Juris noted, this pivot was underscored by the UK and US refusing to sign the 2025 Paris Statement on Inclusive and Sustainable AI. Public-facing materials from the UK’s institute began to replace references to “societal impacts,” “unequal outcomes,” and “public accountability” with vaguer language such as “societal resilience” and keeping the public “safe and secure.”

This strategic realignment extended to policy directives: the UK government questioned the relevance of AISI staff researching issues like freedom of expression, bias, or discrimination, while the US National Institute of Standards and Technology instructed partners to avoid references to “AI safety,” “responsible AI,” and “AI fairness.” These changes, critics argue, undermine foundational principles enshrined in the OECD and UNESCO recommendations on AI ethics.

The ramifications are already being felt at the international level. The US has distanced itself from the joint testing exercises of the International Network of AISIs, while the UK now limits its participation to security-related aspects. As the Director of South Korea’s AISI pointedly remarked, “We are doing science – and safety risks don’t suddenly disappear.” The concern is that by sidelining research into broader societal risks, a whole range of harms to people and society may go unaddressed.

Amidst this policy turbulence, another historic development is unfolding. On November 19, 2025, experts from the US and China are set to meet in Hong Kong to hammer out a consensus on restricting the use of AI in the defense sector, as reported by TokenRing AI. While this is not a binding treaty, it marks a significant step in bilateral AI governance, building on intergovernmental talks initiated by Presidents Joe Biden and Xi Jinping in 2023 and subsequent high-level dialogues in 2024. The focus of the Hong Kong forum is clear: to ensure human control over critical military AI functions, with a mutual pledge that any AI-enabled weapons deployment must have affirmative human authorization. There’s also a push for a bilateral commitment not to use AI to interfere with each other’s nuclear command and control systems—an explicit technical safeguard to reduce the risk of catastrophic accidents.

The forum is expected to define “red lines” for military AI applications, with bioweapons highlighted as a key area for collaboration. This approach marks a departure from the unilateral US export controls of the past (such as the 2022 AI chip ban) toward more cooperative, mutually agreed guardrails. China’s advocacy for global AI cooperation, including its July 2025 proposal for an international AI organization, finds concrete expression in this bilateral platform.

The anticipated consensus has broad implications for tech giants and startups alike. US chipmakers, already constrained by export controls, face shrinking market share in China, while Chinese firms accelerate efforts toward algorithmic sovereignty and innovation under constraint. Both countries are racing to build self-sufficient AI ecosystems, with divergent approaches to hardware, software, and ethical standards. The agreement on human control could also spur the development of more explainable, auditable AI systems, influencing design principles across sectors.

Despite the promise of this expert consensus, significant challenges remain. The dual-use nature of AI, where civilian advances can be rapidly militarized, makes regulation exceptionally tricky. Deep-rooted mistrust and the absence of a comprehensive global framework hamper enforceability. As Tsinghua University’s Sun Chenghao observed, achieving binding agreements is “logically difficult,” but even expert-level consensus is a crucial first step.

Ultimately, the current moment represents both progress and uncertainty. The UK’s retreat from comprehensive AI safety leaves a leadership vacuum, even as the US and China inch toward pragmatic cooperation in the military domain. Whether the world can translate these early agreements into robust, enforceable global standards will define the trajectory of AI governance for years to come. For now, the urgent question remains: who will step up to fill the gap in international AI safety leadership?