Artificial intelligence (AI) has rapidly shifted from futuristic speculation to a pressing global security issue, with the United Nations raising alarms about its growing role in modern warfare. On October 2, 2025, UN Secretary-General António Guterres addressed the Security Council, warning that AI is transforming daily life and the global economy at "breathtaking speed"—but its military applications pose severe risks if left unchecked. "AI is no longer a distant horizon – it is here," Guterres declared, underscoring the urgent need for international action to regulate the use of AI in military contexts.
Guterres did not mince words about the dangers. He pointed out that recent conflicts, most notably the ongoing Russia-Ukraine war, have become real-world testing grounds for AI-powered targeting and autonomy. Autonomous drones and so-called "loitering munitions"—weapons that can independently search for and attack targets—are now deployed by both sides. This shift has sparked fears that algorithms, rather than human commanders, are inching closer to making life-and-death decisions on the battlefield. "Humanity’s fate cannot be left to an algorithm," Guterres emphasized. "Humans must always retain authority over life-and-death decisions."
Echoing his previous calls, the Secretary-General reiterated his demand for a legally binding ban on lethal autonomous weapons systems that operate without meaningful human control, urging that such an agreement be reached by 2026. He was unequivocal on the matter of nuclear arms: "Until nuclear weapons are eliminated, any decision on their use must rest with humans – not machines." Guterres also advocated for coherent global frameworks to regulate AI from its design phase through deployment, stronger safeguards against military misuse, and urgent measures to protect information integrity in conflict zones. "Innovation must serve humanity – not undermine it," he told the Council, according to the UN’s official summary of the debate.
These warnings are not merely theoretical. On September 24, Guterres had already cautioned the Security Council that "Humankind can’t allow killer robots and other AI-driven weapons to seize control of warfare." Days later, on September 26, the International Committee of the Red Cross (ICRC)—an organization with 160 years of experience witnessing new weapons on the battlefield—joined the chorus, urging states to swiftly adopt a legally binding instrument to set clear prohibitions and restrictions on autonomous weapon systems. The ICRC emphasized the need for a "human-centered approach" to military AI, in order "to ensure that human control and judgement are preserved in all decisions that pose risks to the life and dignity of people affected by armed conflict."
The UN’s urgency is driven by a stark reality: an intense global arms race in AI is already underway. According to a report Guterres presented to the General Assembly in June 2025, major powers such as the United States, China, and Russia are advancing AI-driven military doctrines and technologies at breakneck speed. The US, for example, is developing operational concepts such as Joint All-Domain Command and Control (JADC2) and Mosaic Warfare, which aim to achieve "decision superiority" by integrating data and distributing command across all domains. Meanwhile, China’s ambitious Military-Civil Fusion strategy seeks to systematically integrate civilian AI innovation into its military-industrial complex. Russia, too, is investing heavily in AI as a force multiplier for future warfare.
At the heart of this race are AI-enabled Decision Support Systems (DSS), which help commanders process and analyze vast volumes of data, from radar feeds to open-source intelligence, to make faster and better-informed decisions on the battlefield. The military uses of AI extend well beyond decision support: it underpins autonomous capabilities for surveillance, disarming improvised explosive devices (IEDs), operating unmanned aircraft, and deploying robot sentries. AI also plays a growing role in cybersecurity, detecting and responding to threats in real time and even enabling targeted cyberattacks. Perhaps most controversially, AI now assists in the "kill chain" by identifying and selecting potential threats, assessing collateral damage, and informing weapon selection for precise targeting.
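To make the DSS idea concrete, here is a minimal, hypothetical Python sketch of how such a system might fuse threat scores from multiple sensors into a ranked queue for a human operator. Every name, field, and threshold below is an illustrative assumption, not a description of any fielded system; the one deliberate design point, echoing Guterres's demand, is that the software ranks and flags but never authorizes force.

```python
# Hypothetical decision-support sketch: fuse multi-source threat scores and
# queue the results for a human operator. All names and thresholds are
# illustrative assumptions, not drawn from any real military system.
from dataclasses import dataclass


@dataclass
class Detection:
    source: str          # e.g. "radar", "uav_video", "osint"
    track_id: str        # identifier of the tracked object
    threat_score: float  # model-estimated threat level, 0.0 to 1.0
    confidence: float    # trust in this detection (assumed > 0)


def fuse(detections: list[Detection]) -> dict[str, float]:
    """Confidence-weighted average of threat scores, per track."""
    totals: dict[str, float] = {}
    weights: dict[str, float] = {}
    for d in detections:
        totals[d.track_id] = totals.get(d.track_id, 0.0) + d.threat_score * d.confidence
        weights[d.track_id] = weights.get(d.track_id, 0.0) + d.confidence
    return {t: totals[t] / weights[t] for t in totals}


def operator_queue(detections: list[Detection], threshold: float = 0.8) -> list[str]:
    """Rank tracks that exceed the threshold for HUMAN review.

    The output is advice: every flagged track goes to an operator's queue,
    and no engagement decision is made in software.
    """
    scores = fuse(detections)
    flagged = [t for t, s in scores.items() if s >= threshold]
    return sorted(flagged, key=lambda t: -scores[t])
```

The confidence weighting is one simple fusion choice among many; the substantive point is architectural. Keeping the human operator as the decision authority is precisely the property that critics fear erodes as such pipelines accelerate.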
Yet, for all its promise, experts warn that AI in warfare is a double-edged sword. Yoshua Bengio, Professor at Université de Montréal, cautioned the Security Council that "scientists still do not know how to design AIs that will not harm people, that will always act according to our instructions." He added, "If we don’t learn how to build trustworthy AI, humans are under threat from AI misuse by bad actors or through the misalignment of AI systems with societal norms and laws." Yejin Choi, Professor of Computer Science at Stanford University, described the current moment as "an extraordinary inflection point," stressing that equitable access to AI, not concentration in the hands of a few nations or corporations, should be the "north star" of development.
World leaders echoed these concerns. Nataša Pirc Musar, President of Slovenia, observed, "The world has witnessed the growing digitalization of warfare." She pointed out that in conflict zones from Gaza to Sudan to Ukraine, "algorithms, armed drones, and robots created by humans have no conscience. We cannot appeal to their mercy or beg them to spare their loved ones." Pakistan’s Defence Minister Khawaja Muhammad Asif warned that autonomous munitions have already been used between nuclear-armed states, while Greece’s Prime Minister Kyriakos Mitsotakis called for the Council to "govern the age of AI" as it once did for nuclear weapons. Marcelo Rebelo de Sousa, President of Portugal, insisted, "Human control, decision and accountability must be at the heart of the use of force. It is a moral, ethical and legal responsibility that cannot – and should not – be delegated."
Despite the consensus on risks, the path forward is riddled with challenges. Experts Myriam Dunn Cavelty and Sarah Wiedemar at the Center for Security Studies, ETH Zurich, identified five major hurdles: technical limitations (such as degraded or incomplete data undermining AI DSS performance); organizational risks (including overreliance on commercial data providers and cloud vulnerabilities); doctrinal uncertainties (blurring lines of responsibility and accountability in command structures); political and legal questions (compliance with international humanitarian law); and strategic dependencies (reliance on commercial platforms that may undermine national autonomy).
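The first hurdle, degraded or incomplete data, lends itself to a short illustration. Below is a hedged Python sketch, with all field names and limits invented, of an input-quality gate that refuses to produce a recommendation from stale or sparse sensor feeds and defers to human judgment instead; this is one plausible mitigation, not an established practice.

```python
# Hypothetical input-quality gate for an AI decision aid, illustrating the
# "degraded or incomplete data" hurdle. Field names and limits are invented.
import time


def inputs_trustworthy(feeds: list[dict], max_age_s: float = 5.0,
                       min_feeds: int = 2) -> bool:
    """True only if enough feeds are fresh enough to reason from."""
    now = time.time()
    fresh = [f for f in feeds if now - f["timestamp"] <= max_age_s]
    return len(fresh) >= min_feeds


def advise(feeds: list[dict]) -> str:
    if not inputs_trustworthy(feeds):
        # Fail loudly: surface the data gap instead of guessing from bad input.
        return "DATA DEGRADED: defer to human judgment"
    return "recommendation from fused feeds"  # placeholder for model output
```

Such a gate deliberately trades speed for soundness, anticipating the point that acceleration alone is not advantage.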
Critics warn that overreliance on AI could erode human judgment, obscure accountability, and increase the risk of catastrophic failures, especially if systems are disrupted by cyberattacks or technical glitches. As Dunn Cavelty and Wiedemar put it, "technological capability alone does not guarantee strategic advantage." The challenge is not just to accelerate decision-making, but to ensure that decisions remain informed, accountable, and operationally sound, even under the fog of war.
As Guterres concluded in his Security Council address, "The window is closing to shape AI – for peace, for justice, for humanity. We must act without delay." With AI already reshaping the course of conflicts and the rules of war, the world faces a pivotal moment: will innovation serve humanity, or undermine it?