As the world grapples with the rapid evolution of technology and its intersection with conflict, recent events at the United Nations have highlighted both the promise and peril of digital tools in warfare and governance. Two pivotal moments—Israeli Prime Minister Benjamin Netanyahu’s controversial speech at the United Nations General Assembly (UNGA) and a heated debate at the UN Security Council (UNSC) on artificial intelligence—have cast a spotlight on the urgent need for global frameworks to manage these new realities.
Last month, on the grand stage of the UNGA, Netanyahu delivered a speech that reverberated far beyond the assembly hall. According to reporting by Shannon Raj Singh, Theodora Skeadas, and others, more than 100 diplomats from over 50 countries staged a dramatic walkout as Netanyahu took the podium. In his address, Netanyahu doubled down on Israel's military campaign against Hamas, calling for their disarmament and the return of hostages. But it was not just what he said that raised eyebrows and alarm bells; it was his threat to deliver the speech to an entirely different audience.
Netanyahu claimed that, thanks to "special efforts by Israeli intelligence," his speech would be broadcast directly into Gaza, not only through "massive loudspeakers" positioned around the Strip but also by streaming the speech directly onto the cellphones of civilians living there. While Israeli authorities later clarified that they did not actually take control of Gazans’ phones, they did confirm the use of loudspeakers mounted on trucks along the border—and even inside Gaza itself—to blast Netanyahu’s words across the enclave. As reported by an Israeli government spokesperson, the operation was a coordinated effort between civilian elements and the Israel Defense Forces, designed to ensure Netanyahu’s "historic UN general assembly speech will be heard in the Gaza Strip."
This episode is just the latest in a long line of digital tactics deployed by Israel in Gaza. For years, human rights groups and the United Nations’ Office of the High Commissioner for Human Rights have documented Israel’s use of tracking, surveillance, and military planning technologies in the region. The Guardian, for example, reported in August 2025 that Israel’s Unit 8200—a military surveillance agency—used Microsoft’s Azure cloud platform to collect and store recordings of millions of calls made by Palestinians. These recordings, the report noted, were used to inform military operations in both Gaza and the West Bank.
But the loudspeaker operation, and the very public threat to broadcast directly to civilian devices, marked a chilling escalation in information warfare. As the article published by Raj Singh and colleagues emphasized, such tactics are not merely about informing the public—they are about coercion, intimidation, and psychological operations. The act of surrounding Gaza with loudspeakers blasting a foreign leader’s speech is a powerful symbol of control, aimed at signaling military dominance and undermining the morale of an already embattled population.
Perhaps even more troubling is the precedent these actions set. Forced messaging—whether through physical loudspeakers or digital intrusions—could easily be repurposed to compel evacuations, enforce compliance, or even disseminate genocidal propaganda. The article warns that, especially in light of pending genocide claims against Israel at the International Court of Justice, normalizing such tactics could have grave consequences for international law and human rights.
These concerns are hardly theoretical. The people of Gaza have endured at least 23 internet shutdowns since October 2023, severely disrupting their ability to access emergency services, communicate with loved ones, or share information about their plight. The specter of forced digital messaging only deepens fears of surveillance and repression, leading many to self-censor and withdraw from digital spaces. This, in turn, has a chilling effect on free expression and access to information—two lifelines in times of conflict and crisis.
International humanitarian law is clear: targeting civilians with harmful information, especially in ways that could cause psychological harm or undermine access to essential services, is prohibited. As the International Committee of the Red Cross has clarified, these rules apply to cyber and information operations just as surely as they do to conventional weapons. The right to privacy, enshrined in foundational human rights texts like the International Covenant on Civil and Political Rights, extends to digital communications and personal data—regardless of whether individuals are under a state’s direct jurisdiction.
Against this fraught backdrop, the United Nations Security Council convened on September 24, 2025, to debate the implications of artificial intelligence for international peace and security. Chaired by South Korean President Lee Jae Myung, the session underscored the dizzying pace at which technology is outstripping diplomatic consensus. "Eighty years ago, the UN's central concern at its founding was how the international community would manage the emerging threat of nuclear weapons," Lee remarked. "Now it is time to explore new governance structures to address the new challenges and threats posed by AI."
The debate, which coincided with the opening week of the UNGA's 80th session, saw world leaders and experts alike grapple with the double-edged nature of AI. On the one hand, countries like France and the UK highlighted the transformative potential of AI for peacekeeping: enhancing early warning systems, streamlining logistics, and improving data analysis. Kenya and Guyana pointed to AI's promise for strengthening health systems and climate response, both critical for the prevention and management of conflict. The optimism was palpable, with many seeing AI as a key driver of economic growth and progress toward the Sustainable Development Goals.
Yet the risks were impossible to ignore. Concerns ranged from AI-generated disinformation—capable of undermining democracies and endangering peacekeepers—to cyberattacks on critical infrastructure and the exacerbation of online extremism. Countries such as Somalia and Sierra Leone warned that the uneven spread of AI technology could leave vulnerable nations even more exposed, while Algeria highlighted how limited internet access and weak ICT regulations in Africa compound these challenges. The specter of "digital colonialism" loomed large, with several Global Majority countries cautioning against a future in which advanced economies wield disproportionate influence over AI development and governance.
The debate also touched on the military applications of AI, with particular attention to autonomous weapons systems (AWS). UN Secretary-General António Guterres reiterated his call for a ban on fully autonomous AWS operating without human control by 2026—a position echoed by many, though not all, member states. The United States, for instance, rejected efforts to impose centralized global governance of AI, while Russia argued that the Security Council’s involvement would duplicate existing initiatives. China advocated for a "people-centered approach" grounded in international law and shared values, and France urged collaboration with other AI governance processes, such as the AI Action Summits.
What emerged from the debate was not a consensus, but a recognition of the stakes. As Yejin Choi, a professor at Stanford University’s Institute for Human-Centered AI, put it, "the world is at an extraordinary inflection point." The question is not just how to harness AI for good, but who gets to decide the rules of the road—and how to ensure that the benefits are shared, rather than hoarded by the powerful.
In the end, both the events in Gaza and the deliberations at the UN point to a simple truth: technology is neither inherently good nor bad. It is a tool, shaped by the values, choices, and safeguards we put in place. As digital warfare and AI governance move ever closer to the heart of international affairs, the world faces a moment of reckoning—one that will determine not just the future of conflict, but the very fabric of global society.