On October 12, 2025, the debate over artificial intelligence (AI) and its role in warfare took center stage at the United Nations Security Council. Australia’s Minister for Foreign Affairs, Penny Wong, delivered a pointed address, highlighting both the promise and peril of AI as it rapidly transforms the global security landscape. While Wong acknowledged the remarkable benefits AI offers in fields like health and education, her speech zeroed in on the dangers of integrating AI into nuclear weapons systems and autonomous military platforms.
"Nuclear warfare has so far been constrained by human judgment. By leaders who bear responsibility and by human conscience. AI has no such concern, nor can it be held accountable. These weapons threaten to change war itself and they risk escalation without warning," Wong warned, according to The Conversation. Her remarks underscored a growing anxiety among world leaders: as AI technologies become more sophisticated and embedded in military operations, the traditional checks provided by human oversight and ethical judgment may be eroded.
AI’s reach in military contexts is already broad and, some would argue, unsettling. According to The Conversation, the Israel Defense Forces (IDF) reportedly employs an AI system known as "Lavender" to identify suspected members of militant groups, including Hamas. Emblematic of AI-powered decision support, the system sits at the heart of moral debates over delegating life-and-death choices to algorithms. The use of such technology raises questions about the so-called "accountability gap": the notion that when things go wrong, it is unclear who, or what, should be held responsible.
Yet, as experts and officials point out, the accountability debate is not unique to AI. Legacy weapons, such as unguided missiles and landmines, also operate without human intervention during their most destructive moments, but rarely inspire the same scrutiny. As The Conversation notes, no one asks whether an unguided missile or landmine is "at fault" when tragedy strikes. The difference, perhaps, is the mystique and perceived autonomy of AI—a technology that, unlike a simple explosive, can analyze data, make predictions, and, in some cases, select targets.
Australia’s own experience with automated systems provides a cautionary tale. The Robodebt scandal, in which a flawed government-run automated debt recovery system caused widespread hardship, illustrated that the real failures lay not with the technology itself but with the humans who designed, implemented, and oversaw it. As The Conversation put it, "the Robodebt scandal in Australia saw misfeasance on behalf of the federal government, not the automated system it relied on to tally debts." This real-world example reinforces the argument that responsibility for AI’s actions, whether in civilian or military life, ultimately rests with people—not the algorithms they build.
AI’s integration into military operations is not limited to controversial targeting systems. The technology is also revolutionizing battlefield awareness and decision-making. On the same day as Wong’s address, defense technology company Leonardo DRS unveiled SAGEcore™, a ruggedized AI software platform engineered for real-time threat detection and decision support at the tactical edge. According to a company press release, SAGEcore uses AI and machine learning to rapidly fuse complex data from multiple sensors—such as radar and infrared—into a shared, real-time view of the battlespace. This enables warfighters to make faster, more informed decisions, even in the most challenging environments.
"What’s new isn’t just the tech—it’s also how we’ve engineered it to work together at the tactical edge," said John Baylouny, Chief Operating Officer at Leonardo DRS. "Our platform merges AI, sensor integration and ruggedized computing to sense—and make sense of—an increasingly complex battlespace. This launch reaffirms our mission to equip warfighters with real-time tools for decisive action—while creating enduring value for our partners and stakeholders."
SAGEcore is optimized for use on tactical vehicles, airborne platforms, maritime systems, and even space vehicles. Its capabilities include seamless integration of AI and machine learning algorithms, real-time execution on ruggedized GPUs, multi-sensor data fusion using open standards, high-assurance encryption, and mission-critical communications redundancy. The platform supports a wide range of missions, from counter-unmanned aerial systems (counter-UAS) and electronic warfare to autonomous operations, demonstrating just how deeply AI is being woven into the fabric of modern defense.
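For readers unfamiliar with the term, "multi-sensor data fusion" can be illustrated with one of its simplest textbook forms: combining detections of the same object from different sensors, weighted by how much each sensor is trusted. The sketch below is purely illustrative and assumes nothing about how SAGEcore actually works; the Detection class, the fuse function, and the numbers are hypothetical.

```python
# Minimal, hypothetical sketch of multi-sensor fusion.
# This is NOT Leonardo DRS code or the SAGEcore API; it simply shows
# inverse-variance weighting of detections from two sensors
# (e.g. a radar return and an infrared detection of the same target).
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str      # e.g. "radar" or "infrared"
    x: float         # estimated position (metres, 1-D for simplicity)
    variance: float  # measurement variance; lower = more trusted

def fuse(detections: list[Detection]) -> tuple[float, float]:
    """Fuse detections of one object into a single estimate.

    Returns the fused position and its variance, which is smaller
    than the variance of any individual sensor.
    """
    if not detections:
        raise ValueError("no detections to fuse")
    weights = [1.0 / d.variance for d in detections]
    total = sum(weights)
    fused_x = sum(w * d.x for w, d in zip(weights, detections)) / total
    fused_var = 1.0 / total
    return fused_x, fused_var

if __name__ == "__main__":
    dets = [
        Detection("radar", x=1520.0, variance=25.0),
        Detection("infrared", x=1512.0, variance=9.0),
    ]
    pos, var = fuse(dets)
    print(f"fused position: {pos:.1f} m (variance {var:.1f})")
```

Real battlefield systems layer far more on top of this idea (tracking over time, sensor alignment, clutter rejection), but the principle is the same: several imperfect views of the battlespace are merged into one estimate that is more reliable than any single source.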
Leonardo DRS emphasizes rapid innovation and deployment, working with both traditional and non-traditional acquisition pathways to speed up the adoption of mission-ready AI. This approach, the company argues, ensures that advanced sensing, machine learning, and edge computing can be delivered as a cohesive stack, ready for the demands of today’s battlefields.
But as the technology advances, so too does the need for robust ethical and operational frameworks. According to The Conversation, all complex systems, including AI, exist across a lifecycle: from initial conception and design, through development and deployment, to eventual retirement. At each stage, humans make conscious decisions that shape the system’s capabilities, limitations, and safeguards. "What this lifecycle structure creates is a chain of responsibility with clear intervention points. This means, when an AI system is deployed, its characteristics—including its faults and limitations—are a product of cumulative human decision making," The Conversation writes. In other words, AI does not exist outside the chain of command or the web of human accountability.
This perspective challenges the popular narrative that AI is uniquely unaccountable. As experts point out, no inanimate object—whether a landmine, a missile, or an AI algorithm—has ever been held responsible for its actions. The focus, therefore, should not be on the technology itself, but on the people and processes that govern its use. "The argument of accountability on behalf of a system is neither here nor there, because ultimately, decisions, and the responsibilities of those decisions, always sit at the human level," The Conversation asserts.
Wong’s speech, and the ongoing debate it has sparked, serves as a reminder that while AI may change the tools and tactics of warfare, it does not absolve humans of their ethical and legal responsibilities. The technology’s promise—whether in protecting soldiers, improving situational awareness, or reducing civilian harm—will only be realized if it is matched by rigorous oversight and a clear chain of accountability.
As AI continues to reshape the nature of conflict, the world’s eyes are on policymakers, military leaders, and technologists to ensure that innovation does not come at the expense of responsibility. The future of warfare, it seems, will be defined not just by the capabilities of machines, but by the values and judgment of the people who wield them.