09 October 2025

AI Supercharges Global Cybercrime As OpenAI Sounds Alarm

OpenAI’s new report details how hackers, scammers, and foreign states are using ChatGPT and other AI tools to accelerate phishing, malware, and propaganda efforts worldwide.

Artificial intelligence has been woven into the fabric of daily life, powering everything from voice assistants to customer service chatbots. But as AI models like ChatGPT become more sophisticated and accessible, a darker trend is emerging: foreign adversaries and cybercriminals are increasingly harnessing these tools to supercharge phishing, malware, scams, and propaganda campaigns. The latest findings from OpenAI and cybersecurity analysts paint a picture of adversaries bolting AI onto their existing playbooks, not to reinvent the wheel but to spin it faster than ever before.

On October 7, 2025, OpenAI published its third-quarter security report, "Disrupting malicious uses of AI: an update," shining a spotlight on how threat actors—ranging from nation-state hackers to scam cartels—are exploiting AI for nefarious ends. According to OpenAI, since February 2024 it has actively monitored and blocked malicious uses of its models, disrupting over 40 networks for violating usage policies. The report, detailed in Cybernews, HackerNoon, and Hackread, documents a global surge in AI-powered attacks, with perpetrators hailing from Russia, North Korea, China, Cambodia, Myanmar, and Nigeria.

“We continue to see threat actors bolt AI onto old playbooks to move faster, not gain novel offensive capability from our models,” OpenAI stated in its security blog. This sentiment is echoed across the cybersecurity community. As Hackread reports, adversaries are spreading their operations across multiple AI systems—using ChatGPT for reconnaissance and planning while relying on other models for execution and automation. The core tactics haven’t changed; what’s new is the speed and scale AI affords.

Take, for instance, the Russian-speaking criminal accounts OpenAI flagged. These actors used multiple ChatGPT accounts to prototype and troubleshoot malware components, including remote-access trojans and credential stealers. While the AI models refused direct malicious prompts, users cleverly extracted functional code snippets to assemble their tools elsewhere. “We found no evidence that access to our models provided these actors with novel capabilities or directions that they could not otherwise have obtained from multiple publicly available resources,” OpenAI reported. The goal wasn’t to invent new cyberweapons but to refine and accelerate existing ones.

Korean-language operators, meanwhile, leveraged ChatGPT for command-and-control development, cryptocurrency-themed phishing, HTML obfuscation, and even proxying reCAPTCHA to create convincing login pages, according to Cybernews and Hackread. Each account often handled specific technical tasks—like browser extension conversion or VPN configuration—mirroring the structure of a corporate development team. The result? Streamlined, scalable cyberattacks that can be managed by smaller, more agile teams.

Chinese-language actors also entered the fray, using ChatGPT to generate detailed and formulaic phishing content in multiple languages, plan encrypted command-and-control components, assist with malware debugging, and conduct reconnaissance. Their activities coincided with campaigns targeting academia, think tanks, and the semiconductor sector, as reported by Volexity and Proofpoint. OpenAI observed that these operators were technically competent but unsophisticated, sticking to tried-and-true methods while seeking efficiency gains through AI.

Organized crime networks have not been left behind. Scam operations traced to Cambodia, Myanmar, and Nigeria used ChatGPT to translate messages, write fake investment pitches, and handle the logistics of large-scale scam centers. In Nigeria, for example, scammers posed as trading experts or job recruiters, luring victims into private messaging groups where all the chat content was generated by AI to create an air of authenticity. Another network designed entire fake online investment firms, complete with fabricated employee biographies, using ChatGPT as their creative engine. The majority of interactions, however, were relatively simple—translation, content generation, and basic phishing scripts—demonstrating that the criminal playbook remains largely unchanged, just turbocharged.

State-linked abuses of AI are equally concerning. OpenAI discovered Chinese government-linked accounts using ChatGPT to draft proposals for large-scale social media monitoring systems and to profile activists. One user requested help outlining a "High-Risk Uyghur-Related Inflow Warning Model," aiming to track individuals through travel and police data. While the AI models only returned public data, the intent behind these requests raised serious concerns about surveillance and civil liberties.

Influence operations are also evolving. Russian and Chinese actors have used AI to produce propaganda videos and social media posts related to geopolitical disputes. The notorious Russian "Stop News" campaign resurfaced, using ChatGPT to write scripts for short news-style videos and social media posts praising Russia and criticizing Western nations. Meanwhile, the "Nine Emdash Line" operation, linked to China and named in a punning nod to the South China Sea's nine-dash line, generated English and Cantonese posts criticizing the Philippines, Vietnam, and Hong Kong democracy activists. These campaigns sought advice from AI on boosting engagement through TikTok challenges and hashtags, though most of their posts failed to gain significant traction before being suspended.

Interestingly, OpenAI noted that some threat actors are becoming wise to the telltale signs of AI-generated content. For example, several savvy operators began removing em-dashes from ChatGPT outputs before publishing their text, attempting to evade AI-detection tools. This cat-and-mouse game underscores the growing sophistication of both attackers and defenders in the AI era.
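To see why this matters for defenders, consider a toy illustration (ours, not drawn from OpenAI's report): a naive detector that flags text as AI-generated based on em-dash density, a superficial signal that a one-line string replacement defeats. The function names and threshold below are hypothetical.

```python
# Hypothetical sketch: a brittle "AI-text" heuristic keyed on em-dash
# frequency, and the trivial scrubbing step that evades it.

def em_dash_rate(text: str) -> float:
    """Em-dashes per 1,000 characters, a superficial AI-writing signal."""
    if not text:
        return 0.0
    return text.count("\u2014") / len(text) * 1000

def naive_ai_flag(text: str, threshold: float = 2.0) -> bool:
    """Flag text as likely AI-generated if em-dash density is high."""
    return em_dash_rate(text) >= threshold

draft = "The markets surged \u2014 analysts cheered \u2014 and critics went quiet."
scrubbed = draft.replace("\u2014", "-")  # the one-line evasion described above

print(naive_ai_flag(draft))     # True: flagged on punctuation alone
print(naive_ai_flag(scrubbed))  # False: same message, heuristic defeated
```

The takeaway for detection vendors is that surface-level stylistic tells are the first thing adversaries learn to scrub.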

But it’s not all doom and gloom. OpenAI’s monitoring revealed a silver lining: ChatGPT is being used to detect scams about three times more often than it is used to create them. Everyday users are turning to AI to verify suspicious messages, helping to prevent fraud and deception. When malicious activity is detected, OpenAI bans offending accounts and shares insights with partners to disrupt ongoing threats. “We are dedicated to identifying, preventing, and disrupting attempts to abuse our models for harmful ends,” the company said.
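The defensive pattern OpenAI describes is straightforward to reproduce. As a minimal sketch, the snippet below passes a suspicious message to a model via the openai Python SDK and asks for a scam assessment; the model name, prompt wording, and example message are our assumptions, not taken from OpenAI's report.

```python
# Illustrative sketch: using a chat model as a first-pass scam screener.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

suspicious = (
    "Congratulations! Your account won $5,000. "
    "Confirm your bank details at hxxp://secure-payout.example to claim."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any current chat model would do
    messages=[
        {"role": "system",
         "content": "You are a fraud analyst. Assess whether the following "
                    "message is likely a scam and briefly list the red flags."},
        {"role": "user", "content": suspicious},
    ],
)
print(response.choices[0].message.content)
```

This is the same capability scammers abuse, pointed in the other direction: the model enumerates red flags (urgency, unsolicited winnings, requests for bank details) faster than most users could research them.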

Cybersecurity experts, however, warn that AI-powered attacks pose unique risks. As Evan Powell, CEO of DeepTempo, explained to Hackread, “Cybersecurity defences are uniquely vulnerable to AI-powered attacks. Today’s defences are almost entirely based on static rules: if you see A and B while C, then that’s an attack and take action. Today’s AI attackers train their systems to avoid these fixed pattern detections, which allows them to slip into enterprises and government systems at an increasing rate.” Powell added that AI boosts the productivity of attackers, enabling individuals to carry out operations that once required a well-funded organization or nation-state. “The implications are terrifying.”
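Powell's "if you see A and B while C" pattern can be made concrete. The sketch below (our illustration; the field names, thresholds, and rule logic are hypothetical) shows why a fixed-pattern rule catches the textbook attack but misses an attacker who varies just one parameter.

```python
# Minimal sketch of static, signature-style detection: a fixed
# "A and B while C" rule over login events.
from dataclasses import dataclass

@dataclass
class Event:
    src_ip: str
    failed_logins: int        # A: repeated authentication failures
    new_admin_account: bool   # B: a privilege change
    off_hours: bool           # C: activity outside business hours

def static_rule(e: Event) -> bool:
    """Fires only on the exact fixed pattern; slight variations slip past."""
    return e.failed_logins >= 5 and e.new_admin_account and e.off_hours

# The textbook attack trips the rule...
print(static_rule(Event("10.0.0.9", 7, True, True)))  # True
# ...but an attacker pacing themselves under the threshold does not.
print(static_rule(Event("10.0.0.9", 3, True, True)))  # False
```

Adaptive, model-based defenses of the kind Powell advocates aim to catch the second case by learning behavior over time rather than matching fixed thresholds.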

New malicious AI tools are also surfacing, helping cybercriminals slip spam past email security filters (SpamGPT) and turn ordinary PDF files into malware lures (MatrixPDF). The arms race between attackers and defenders shows no signs of slowing, as both sides adapt to the AI-driven landscape.

As OpenAI and its partners continue to monitor, disrupt, and publicize these threats, the hope is that increased awareness and improved protections will help tip the balance in favor of everyday users. For now, the message is clear: AI is reshaping the cyber threat landscape, and vigilance is more crucial than ever.