World News
11 October 2025

AI Transforms Ukraine War As Both Sides Escalate

Russian hackers and autonomous drones drive a new era of warfare in Ukraine, as both nations embrace artificial intelligence for cyberattacks and battlefield dominance.

Across the war-torn plains and cities of Ukraine, a new and unsettling arms race is unfolding—not of tanks or missiles, but of algorithms and artificial intelligence. As Ukraine’s State Service for Special Communications and Information Protection (SSSCIP) reported, Russian hackers are increasingly deploying AI in their relentless cyberattacks. But the cyber battlefield is only one front; both sides are now racing to harness AI for everything from drone warfare to intelligence analysis, fundamentally changing the nature of modern conflict.

In the first half of 2025, Ukraine logged a staggering 3,018 cyber incidents, according to SSSCIP’s latest analytical report—a notable rise from the 2,575 incidents recorded in the second half of 2024. The surge was especially pronounced in attacks targeting local authorities and military entities, while incidents in the government and energy sectors declined. SSSCIP’s specialists, together with the National Cyber Incident Response Team and CERT-UA, observed a marked shift in the tactics, techniques, and procedures of Russian threat actors. The report notes, “A radical change in tactics, techniques, and procedures, the involvement of ‘fresh blood’ in attacks, indicate a decrease in the effectiveness of already known methods due to our effective countermeasures.”

Behind the statistics is a cast of shadowy Russian-linked hacking groups—UAC-0219, UAC-0218, UAC-0226, and UAC-0227—each employing increasingly sophisticated AI-powered tools. UAC-0219, for instance, is believed to use AI to generate PowerShell scripts for data theft and screenshot capture, while UAC-0218 has ramped up its phishing campaigns. Its emails typically carry links disguised as E-Disk file-sharing archives on UKR.NET, which deliver password-protected Office files and encrypted scripts that install the HOMESTEEL malware, a tool designed to quietly siphon off sensitive files.

UAC-0226, meanwhile, has targeted Ukraine’s defense, government, and law enforcement sectors, embedding malicious attachments that deploy reverse-shell payloads and the GIFTEDCROOK malware. The latter is engineered to extract browser data and transmit it to hacker-controlled Telegram chats. CERT-UA has also tracked UAC-0227 since at least March 2025, observing its focus on local governments, critical infrastructure, and the Central Communications Commission. According to the SSSCIP, “To implement the threat, attackers send emails with various content. After trying different approaches to delivering the ransomware, the hackers settled on distributing an SVG file, which is a vector image that opens in a web browser by default.”

The sophistication doesn’t stop at phishing. Russia-linked APT28 has exploited multiple cross-site scripting (XSS) vulnerabilities in popular webmail platforms Roundcube and Zimbra (CVE-2023-43770, CVE-2024-37383, CVE-2025-49113, CVE-2024-27443, CVE-2025-27915), enabling so-called zero-click attacks that require no user interaction—a chilling prospect for organizations already on high alert. SSSCIP’s report grimly concludes, “The use of legitimate online resources for malicious purposes is not a new tactic. However, the number of such platforms exploited by Russian hackers has been steadily increasing in recent times.”

Yet, the digital domain is only half the story. On October 10, 2025, Ukrainian forces intercepted a new kind of threat: a Russian drone, cobbled together but menacing in its capability. As Serhiy Beskrestnov, a consultant to Ukraine’s defense forces, observed after examining the device, “This technology is our future threat.” What set this drone apart was its autonomy. Assisted by AI, the drone could independently locate and attack targets, operating in radio silence and rendering traditional jamming tactics useless.

It’s not just Russia. Both sides have begun integrating AI into their military arsenals. Ukrainian Deputy Defence Minister Yuriy Myronenko told the BBC, “Our military gets more than 50,000 video streams [from the front line] every month which are analysed by artificial intelligence. This helps us quickly process this massive data, identify targets and place them on a map.” The scale is immense: AI sifts through a deluge of raw footage, transforming it into actionable intelligence in real time.

The battlefield is evolving rapidly. Ukrainian troops now deploy AI-based software to enable drones to lock onto targets and autonomously fly the final stretch of their missions. These drones, small and nimble, are nearly impossible to shoot down and, crucially, immune to electronic jamming. As Yaroslav Azhnyuk, CEO of Ukrainian tech firm The Fourth Law, explained, “All a soldier will need to do is press a button on a smartphone app. The drone will do the rest, finding the target, dropping explosives, assessing the damage and then returning to base. And it would not even require piloting skills from the soldier.”

Such advances could transform air defense as well. Interceptor drones, guided by AI, promise to bolster Ukraine’s ability to counter Russian long-range attack drones like the infamous Shaheds. “A computer-guided autonomous system can be better than a human in so many ways,” Azhnyuk said. “It can be more perceptive. It can see the target sooner than a human can. It can be more agile.” Myronenko indicated that these systems are already partly implemented and could be deployed in the thousands by the end of 2026.

But the promise of AI-powered warfare comes with grave risks. Ukrainian developers remain wary of fully autonomous weapons, particularly over the danger of friendly fire. Vadym, whose company DevDroid manufactures remotely controlled machine guns with AI-powered person detection, noted, “We can enable it, but we need to get more experience and more feedback from the ground forces in order to understand when it is safe to use this feature.” The fear is that, without human oversight, AI might fail to distinguish between friend and foe—especially when uniforms are similar—or inadvertently target civilians.

There are also legal and ethical minefields. How will AI-driven weapons systems comply with the rules of war? Can they reliably identify soldiers who wish to surrender or avoid harming non-combatants? Myronenko insists that the final call should always rest with a human, even as AI makes it “easier to decide.” But as the technology spreads, there is no guarantee that every actor will adhere to international humanitarian norms.

The stakes are already high. In June 2025, Ukraine’s “Spider Web” operation—in which 100 drones targeted Russian air bases—was likely enabled by AI tools. Many in Ukraine fear that Russia will adopt similar tactics, not just at the front but deep inside Ukrainian territory. Addressing the United Nations in September 2025, Ukrainian President Volodymyr Zelensky sounded the alarm, warning that AI was fueling “the most destructive arms race in human history.” He called for urgent global rules to govern the use of AI in weapons, equating the issue’s importance to that of nuclear non-proliferation.

The AI arms race in Ukraine is not just a story of machines and code. It’s a contest of wits, ethics, and survival, with the world watching closely—and nervously—as the rules of modern warfare are rewritten in real time.