The cybersecurity landscape is witnessing an alarming and transformative shift as malicious actors increasingly exploit artificial intelligence (AI) technologies. A recent report from KELA Research reveals that mentions of malicious AI tools on cybercrime forums have surged by 200% in 2024 compared to previous years, illustrating how swiftly cybercriminals are adapting to leverage these advanced technologies.
The report, which compiled data from underground communities via its intelligence-gathering platform, also noted a 52% increase in discussions about jailbreaking legitimate AI models, including popular systems like OpenAI’s ChatGPT. This dual-edged threat represents a growing concern: not only are AI tools being weaponized against targets, but methods for circumventing their built-in safety measures are advancing rapidly.
According to KELA, cybercriminals are increasingly distributing and monetizing what the report describes as “dark AI tools,” which include jailbroken versions of public AI models and custom-built malicious applications. For instance, tools like WormGPT have emerged, designed specifically for activities such as phishing and business email compromise (BEC). These advances lower the entry barrier for less skilled attackers, allowing them to conduct complex operations at scale without sophisticated knowledge.
Yael Kishon, the AI product and research lead at KELA, emphasizes the necessity of recognizing this paradigm shift, stating, "We are witnessing a seismic shift in the cyber threat landscape. Cybercriminals are not just using AI – they are building entire sections in the underground ecosystem dedicated to AI-powered cybercrime." As organizations navigate this new terrain, the adoption of robust AI-driven defenses becomes critical.
On the phishing front, KELA’s report highlights that threat actors are leveraging generative AI to enhance their campaigns significantly. With the ability to generate convincing social engineering content, including deepfake audio and video, cybercriminals can impersonate high-level executives to deceive employees into authorizing fraudulent transactions. This kind of sophisticated manipulation presents grave challenges to traditional detection and response systems.
Throughout 2024, the number of compromised accounts on platforms supporting large language models (LLMs) has risen sharply. For example, ChatGPT experienced a staggering increase from 154,000 compromised accounts in 2023 to 3 million in 2024, translating to a growth of nearly 1,850%. Similarly, Gemini, formerly known as Bard, surged from 12,000 to 174,000 compromised accounts, amounting to a dramatic increase of 1,350%. These figures reflect the potential dangers of infostealer malware that targets user credentials, posing serious risks as more users engage with AI-driven platforms.
Research has identified specific jailbreaking techniques being shared on underground forums, enhancing malefactors' capabilities to bypass AI security measures. One such method is word transformation, which successfully circumvents 27% of safety tests by replacing sensitive terms with synonyms or by fragmenting them across messages.
The future threat landscape looks increasingly precarious. KELA anticipates new attack surfaces in 2025, particularly with technologies involving prompt injections and agentic AI – entities that can act autonomously and make decisions. These developments underscore the urgent need for organizations to implement stringent security measures. Recommended safeguards include secure LLM integrations, advanced deepfake detection technologies, and comprehensive user education on AI-related threats.
As KELA points out, the escalation of AI-powered cyber threats necessitates an adaptive approach to cyber defense. Organizations must invest in employee training, proactive threat monitoring, and integrated system solutions that employ AI-driven security measures such as automated intelligence-based red teaming and adversarial simulations designed for generative AI.
In summary, as malicious AI tools proliferate in the cybercriminal underworld, the imperative for reliable counters to these threats has never been greater. The ongoing evolution of AI presents a complex but crucial challenge that organizations must meet head-on to safeguard their assets.