AI-Powered Phishing Attacks Surge Across Schools And Businesses
Cybercriminals leverage artificial intelligence to craft hard-to-detect phishing campaigns, targeting schools and organizations while exploiting third-party vulnerabilities for greater financial impact.
In recent years, the cybersecurity landscape has undergone a dramatic transformation, with artificial intelligence (AI) emerging as both a powerful tool for defenders and a formidable weapon for attackers. Nowhere is this shift more evident than in the alarming rise of AI-driven phishing attacks targeting schools, businesses, and critical infrastructure across the globe. As organizations scramble to adapt, experts warn that the rules of engagement have changed—and the stakes have never been higher.
According to a February 2026 analysis by Acronis, cybercriminals are no longer merely dabbling in AI; they have fully integrated it into their operational workflows. The company’s Cyberthreats Report for the latter half of 2025 found that phishing accounted for a staggering 83% of all email-borne threats, with email-based attacks rising 16% per organization and 20% per user year over year. Managed service providers (MSPs), often seen as the backbone of digital infrastructure for many businesses, bore the brunt of these attacks, with phishing comprising 52% of all incidents targeting them.
But schools, particularly K–12 institutions, have found themselves uniquely vulnerable in this new era of cyberthreats. As Technology Solutions That Drive Education reported on February 18, 2026, AI-powered phishing campaigns are increasingly impersonating superintendents and principals, leveraging real details scraped from district websites and public communications. "The message can reference a real meeting or deadline to create urgency while attaching a malicious document to perform further compromise," cybersecurity expert Clark explained. The sophistication of these attacks is such that the emails "look normal and relatable," he noted, making it increasingly difficult for even well-trained staff to spot the telltale signs of a scam.
The core of the problem lies in generative AI’s uncanny ability to churn out near-duplicate phishing messages that evade traditional detection systems. As Syn, another expert in the field, put it: "It’s just similar enough that we can definitely feel the pain, but it’s different enough that the automation that we have in place cannot just find those and rip them out." AI’s prowess in open-source intelligence means attackers can personalize spear phishing attacks at scale, targeting not just one individual but millions—each with messages tailored to their specific context.
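To see why near-duplicate variants slip past automation, consider a minimal Python sketch. The messages, the blocklist, and the 0.9 threshold are illustrative assumptions, not details from the reports: an exact-hash filter misses any rewording, and even a character-level similarity score falls below a strict near-duplicate cutoff.

```python
import hashlib
from difflib import SequenceMatcher

# Hypothetical known phishing lure already on a blocklist.
known_phish = ("Hi, this is Principal Smith. Please review the attached "
               "budget form before Friday's staff meeting.")

# An AI-paraphrased variant of the same lure: same intent, new wording.
variant = ("Hello, Principal Smith here. Kindly look over the attached "
           "budget document ahead of the staff meeting on Friday.")

blocklist = {hashlib.sha256(known_phish.encode()).hexdigest()}

def exact_match_filter(message: str) -> bool:
    """True if the message's hash is on the blocklist."""
    return hashlib.sha256(message.encode()).hexdigest() in blocklist

def similarity(a: str, b: str) -> float:
    """Character-level similarity ratio between two messages (0.0 to 1.0)."""
    return SequenceMatcher(None, a, b).ratio()

print(exact_match_filter(variant))              # False: one rewording defeats the hash
print(similarity(known_phish, variant) >= 0.9)  # False: below a strict near-duplicate threshold
```

Loosening the threshold far enough to catch every paraphrase would also flag legitimate mail that happens to mention meetings and attachments, which is precisely the bind Syn describes.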
This wave of AI-driven social engineering exploits the very culture of trust and openness that underpins K–12 education. "Schools are built on trust and openness, and attackers take advantage of that," Clark observed. With large, diverse user populations, high staff turnover, shared devices, and a wealth of public information online, schools have become a "honeypot for AI-driven social engineering." Unlike hospitals, which are bound by strict privacy regulations, schools thrive on visibility and collaboration—ironically, the very qualities that now make them easy targets.
It’s not just education feeling the heat. The Acronis report highlights how attackers have shifted focus to collaboration platforms, such as messaging, document sharing, and virtual meetings, as secondary attack channels. The share of advanced attacks aimed at these platforms soared from 12% in 2024 to 31% in 2025. Security teams have responded by tightening identity and access controls, but attackers are adapting just as quickly, tailoring their social engineering techniques to fit these new digital environments.
Meanwhile, the 2026 Cyber Threat Landscape Report from Dataminr underscores the escalating danger posed by third-party vulnerabilities. In 2025, one in four data breaches exploited a third-party vulnerability, often a software flaw in a vendor’s product, carrying roughly 20% more risk than attacks launched directly against an organization’s own systems. Alarmingly, 96% of these vulnerabilities were weaponized within the same year they were disclosed, frequently bypassing internal detection and resulting in twice the data impact per incident. The financial fallout can be enormous, with moderate-risk breaches sometimes costing organizations $50 million to $100 million or more.
Phishing remains the most common intrusion vector, responsible for 60% of breaches, and AI is now behind a massive 80% of these attacks worldwide in 2025. Dataminr’s report also revealed that 30% of cyber intrusions involved the use of valid credentials—often stolen via phishing—rather than traditional break-ins. This trend toward credential theft makes it even harder for organizations to distinguish legitimate users from malicious actors.
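Because the credentials presented are genuine, the password check itself passes; defenses instead have to score the context of the login. Here is a minimal, hypothetical Python sketch of the idea behind risk-based access controls, with user names, device lists, and thresholds invented purely for illustration:

```python
from typing import Dict, Set

# Hypothetical per-user context; real systems draw on far richer signals.
KNOWN_DEVICES: Dict[str, Set[str]] = {"alice": {"laptop-0042"}}
USUAL_COUNTRIES: Dict[str, Set[str]] = {"alice": {"US"}}

def login_risk(user: str, device_id: str, country: str) -> int:
    """Score a login whose credentials have already checked out."""
    risk = 0
    if device_id not in KNOWN_DEVICES.get(user, set()):
        risk += 2  # unfamiliar device
    if country not in USUAL_COUNTRIES.get(user, set()):
        risk += 2  # unusual location
    return risk

def decide(user: str, device_id: str, country: str) -> str:
    risk = login_risk(user, device_id, country)
    if risk >= 4:
        return "block and alert"
    if risk >= 2:
        return "require step-up verification"
    return "allow"

print(decide("alice", "laptop-0042", "US"))  # allow: familiar context
print(decide("alice", "phone-9999", "RO"))   # block and alert: valid password, wrong context
```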
Attackers are also leveraging AI in more insidious ways. Acronis documented cases of AI-enabled scams designed to intensify psychological pressure, such as virtual kidnapping schemes that use AI-generated "proof of life" images to terrorize victims. Criminal groups like GLOBAL GROUP and GTG-2002 have operationalized AI for everything from reconnaissance and data exfiltration to managing ransomware negotiations across multiple victims.
Ransomware itself remains a central feature of the threat landscape, with nearly 150 MSP and telecom organizations targeted and over 7,600 publicly disclosed victims worldwide in 2025. The most active ransomware groups—Qilin, Akira, and Cl0p—collectively racked up thousands of victims, with the United States recording the highest number by country. Supply chain attacks exploiting remote monitoring and management tools like AnyDesk and TeamViewer affected more than 1,200 third-party and supply chain victims, further highlighting the interconnected nature of modern cyber risk.
In response, cybersecurity experts are calling for a multi-pronged defense. Syn cautions against relying solely on technology to "outsmart" AI: "If we just try the stalemate of ‘Can we one up the AI?’ we’re probably going to lose," he said. Instead, he advocates for "human-only trust signals"—personalized passphrases or out-of-band verification (like a phone call to confirm a suspicious request)—that AI cannot easily replicate. Clark echoes the need for layered technical defenses, including strong identity protection, continuous risk-based access controls, and advanced email and endpoint detection.
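As one illustration of a human-only trust signal, here is a hedged Python sketch of out-of-band verification; the addresses and passphrases are hypothetical. The point is that approval hinges on a secret agreed outside email, which a model working from scraped public data cannot supply.

```python
from dataclasses import dataclass

# Hypothetical passphrases agreed in person and never sent over email, so an
# attacker working from scraped public information cannot reproduce them.
TRUST_SIGNALS = {"superintendent@district.example": "blue-heron-47"}

@dataclass
class SensitiveRequest:
    sender: str
    action: str  # e.g. "wire transfer" or "credential reset"

def verify_out_of_band(request: SensitiveRequest, spoken_passphrase: str) -> bool:
    """Approve only after the requester repeats the pre-shared passphrase
    over a second channel (a phone call to a known-good number)."""
    expected = TRUST_SIGNALS.get(request.sender)
    return expected is not None and spoken_passphrase == expected

req = SensitiveRequest(sender="superintendent@district.example", action="wire transfer")
print(verify_out_of_band(req, "blue-heron-47"))  # True: proceed
print(verify_out_of_band(req, "please-hurry"))   # False: treat as phishing
```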
The Dataminr report also points to a shift in organizational behavior, with 63% of companies refusing to pay ransoms in 2025, up from 59% the previous year. This trend, coupled with the move toward fewer but more devastating attacks, signals a new era in cyber risk management—one where resilience, governance, and oversight of AI tools become paramount. As Gartner’s recent predictions for 2026 suggest, the only way forward is to anticipate threats, automate defenses, and build systems robust enough to withstand both traditional and AI-driven attacks.
As AI voices and deepfake videos grow ever more convincing, the challenge for defenders is clear: adapt quickly, think creatively, and never underestimate the ingenuity of those on the other side. The future of cybersecurity will be shaped not just by new technologies, but by the timeless human capacity for trust, verification, and resilience.
Sources
- AI-Driven Phishing Is Putting K–12 Schools at Risk — Technology Solutions That Drive Education
- AI-driven phishing surge dominates 2025 cyberattacks — SecurityBrief UK
- Report: 1 in 4 Data Breaches Exploit Third-Party Vulnerabilities — Tech.co