On October 7, 2025, OpenAI, the San Francisco-based AI powerhouse, announced a sweeping crackdown on ChatGPT accounts linked to suspected Chinese government operatives and other state-affiliated actors. The move, detailed in OpenAI’s latest public threat report, has ignited a fresh debate about the global stakes of artificial intelligence, the risks of authoritarian abuse, and the ongoing technological rivalry between the United States and China.
According to Reuters and other outlets, OpenAI revealed that several accounts, believed to be connected to Chinese government organizations, had been using ChatGPT to request help with designing tools for large-scale surveillance and social media monitoring. These requests, OpenAI said, directly violated its national security policies and triggered immediate bans. Among the most alarming cases: users sought assistance in creating social media “listening” tools, promotional materials for surveillance software, and even proposals for systems that could analyze the travel movements and police records of China’s Uyghur minority and other “high-risk” individuals.
“There’s a push within the People’s Republic of China to get better at using artificial intelligence for large-scale things like surveillance and monitoring,” Ben Nimmo, principal investigator at OpenAI, told CNN. “It’s not last year that the Chinese Communist Party started surveilling its own population. But now they’ve heard of AI and they’re thinking, oh maybe we can use this to get a little bit better.”
OpenAI’s report paints a vivid picture of how generative AI, once the stuff of science fiction, is now being harnessed for both mundane and deeply troubling purposes. While its models have turned down clearly harmful requests, the company found that state-backed and criminal actors are increasingly using AI to streamline their existing operations—be it refining the language in phishing emails, debugging malicious code, or making scam links more convincing.
One user, reportedly connected to a Chinese government entity, asked ChatGPT to help draft a proposal for a "High-Risk Uyghur-Related Inflow Warning Model," a tool designed to track the movements of Uyghur individuals by analyzing transportation bookings and cross-referencing them with police records. Another user sought help creating promotional materials for a tool that purportedly scans platforms like X (formerly Twitter), Facebook, Instagram, Reddit, TikTok, and YouTube for political, religious, or ethnic content, as well as so-called "extremist speech." According to OpenAI, both users were banned for violating its strict policies.
OpenAI’s findings underscore a broader trend: authoritarian regimes are leveraging AI not to invent new forms of repression, but to make existing surveillance and censorship more efficient. “Adversaries are using AI to refine existing tradecraft, not to invent new kinds of cyberattacks,” Michael Flossman, a security expert at OpenAI, told reporters. The company’s report notes that persistent threat actors are even tweaking their methods to mask the telltale signs of AI-generated content, such as removing certain punctuation marks from their posts.
It’s not just China in the crosshairs. OpenAI says it has also shut down accounts linked to suspected Russian-speaking criminal organizations, North Korean hackers, and networks in Cambodia, Myanmar, Nigeria, and Iran. These actors have used ChatGPT for everything from developing malware—like remote access trojans and credential stealers—to running influence campaigns and creating online scams. In one notable operation this summer, OpenAI disrupted efforts in Iran, Russia, and China to use ChatGPT-generated posts and comments to stoke division and drive engagement across various social media platforms.
Despite these threats, OpenAI claims its models are more often part of the solution than the problem. “Our current estimate is that ChatGPT is being used to identify scams up to three times more often than it is being used for scams,” the company said in its report. As phishing attacks and online fraud become more sophisticated, would-be victims are increasingly turning to AI tools like ChatGPT to spot red flags and avoid falling prey to criminals.
Since launching its public threat reporting initiative in February 2024, OpenAI has disrupted and reported over 40 networks that violated its usage policies. The company emphasized that its AI models “consistently refused outright malicious requests,” and found “no evidence of new tactics or that our models provided threat actors with novel offensive capabilities.” Instead, most malicious actors are simply “building AI into existing workflows.”
OpenAI’s crackdown comes as the race for AI supremacy between the US and China continues to intensify. Chinese firm DeepSeek, for example, made headlines in January 2025 with the launch of its R1 model—a ChatGPT-like AI system touted as operating at a fraction of OpenAI’s cost. The emergence of such competitors has alarmed US officials and investors, who worry about the implications for both economic and national security.
The US government is hardly standing on the sidelines. According to CNN, US Cyber Command has pledged to “accelerate adoption and scale capabilities” in artificial intelligence, exploring how AI can be used for both offensive and defensive cyber operations. While OpenAI declined to comment on whether it was aware of US military or intelligence agencies using ChatGPT for hacking, it reiterated its commitment to supporting democracy and upholding strict usage policies.
For its part, the Chinese government has pushed back against OpenAI’s findings. Liu Pengyu, a spokesperson for the Chinese Embassy in Washington, DC, told CNN, “We oppose groundless attacks and slanders against China.” Liu added that China is “rapidly building an AI governance system with distinct national characteristics,” emphasizing a balance between innovation, security, and inclusiveness, and pointing to new laws and ethical guidelines on algorithmic services, generative AI, and data security.
OpenAI’s own business fortunes have soared alongside these controversies. As of early October 2025, the company boasts more than 800 million weekly ChatGPT users. Just last week, OpenAI completed a secondary share sale that valued the firm at a staggering $500 billion—up from $300 billion earlier in the year. The deal allowed employees to sell over $10 billion in stock to a group of heavyweight investors, including Thrive Capital, SoftBank, Dragoneer Investment Group, Abu Dhabi’s MGX, and T. Rowe Price. According to The Information, OpenAI generated about $4.3 billion in revenue in the first half of 2025, marking a 16% jump over its total revenue from all of 2024.
As major tech companies compete fiercely for AI talent (Meta, for instance, recently hired Scale AI's 28-year-old founder, Alexandr Wang, to run its new superintelligence division), the world is watching closely to see how the next chapter of the AI arms race unfolds. For now, OpenAI's latest report serves as a stark reminder that while artificial intelligence can be a force for good, it's also a powerful tool in the hands of those who would use it for surveillance, repression, and cybercrime.
With the stakes higher than ever, the challenge for technology companies, governments, and civil society alike is to ensure that AI remains a tool for progress—not oppression.