Technology
28 October 2025

AI Deepfakes And Cyberattacks Spur Global Alarm In 2025

North Korean hackers, fake satellite images, and unchecked development fuel mounting concerns over artificial intelligence’s role in security, truth, and mental health.

On October 27, 2025, a chorus of warnings about the dangers of artificial intelligence (AI) echoed across the globe from cybersecurity experts, government agencies, and ethicists alike. The message was clear: AI, in the wrong hands or used recklessly, poses a threat that ranges from digital heists to the very fabric of truth in our societies, and even to our mental well-being.

In a candid interview with CoinDesk, Kostas Chalkias, a cryptographer at Mysten Labs, didn’t mince words: "Neural networks are the best tool I have ever had as a white-hat hacker. And you can imagine what happens when it falls into the wrong hands." Chalkias argued that artificial intelligence, especially when wielded by North Korean hacking groups, is a more immediate and dangerous threat to the cryptocurrency ecosystem than much-hyped quantum computing. "There is no evidence that any computer today can break modern cryptography. That is at least 10 years away," he emphasized, dismissing fears of quantum attacks as premature.

The real peril, he explained, is how AI is turbocharging cybercrime. Groups like Lazarus, notorious for their links to North Korea, are now using large language models (LLMs) to automatically scan thousands of smart contracts, seeking vulnerabilities in minutes rather than days or weeks. This ability to combine data from past breaches and instantly identify similar weaknesses elsewhere has turned what was once a small, specialized cadre of state-sponsored hackers into something resembling a digital military-industrial complex. With AI, these groups can scale attacks with nothing more than a prompt.

Chalkias pointed out that decentralized finance (DeFi) platforms are especially exposed. Their open-source code is ripe for AI-powered analysis, allowing LLMs to scrutinize every line for logic flaws. "Each new release of GPT or Claude finds different weak spots. If you are not testing your system against them, you are already behind," he warned. He expects that regulators will soon require exchanges and smart contracts to undergo continuous, AI-aware audits to keep pace with these evolving threats.
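Chalkias did not describe any group's actual tooling, but the mechanics he alludes to are simple enough to sketch. The Python snippet below is a minimal illustration of LLM-assisted contract triage, run repeatedly against successive model releases in the spirit of the "continuous, AI-aware audits" he predicts; the model names, prompt, and contracts/ directory layout are assumptions made for this example, not details from the interview.

```python
# Illustrative sketch only: a first-pass, LLM-assisted triage of Solidity
# contracts, re-run against multiple model releases. Model names, prompt,
# and the "contracts/" layout are assumptions for this example.
from pathlib import Path

from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY to be set

client = OpenAI()

AUDIT_PROMPT = (
    "You are reviewing a Solidity smart contract. List any reentrancy, "
    "access-control, oracle, or arithmetic weaknesses you can find, one per line."
)

# Each model release may surface different weak spots, so run the same triage on all of them.
MODELS = ["gpt-4o-mini", "gpt-4o"]  # placeholder model identifiers


def triage(source: str, model: str) -> str:
    """Ask one model for a quick vulnerability triage of one contract."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": AUDIT_PROMPT},
            {"role": "user", "content": source},
        ],
    )
    return response.choices[0].message.content or ""


if __name__ == "__main__":
    for path in sorted(Path("contracts").glob("*.sol")):
        source = path.read_text()
        for model in MODELS:
            print(f"== {path.name} ({model}) ==")
            print(triage(source, model))
```

A production audit pipeline would add rate limiting, structured reporting, and human review of every finding; the point of the sketch is how little glue code the scanning step itself requires.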

But North Korea’s use of AI doesn’t stop at technical exploits. According to Chalkias, the regime is experimenting with AI-generated propaganda and deepfakes, but their most effective weapon remains social engineering—now amplified by AI’s ability to craft convincing lures and deceptions. "The DPRK will abuse AI for phishing, deepfakes and deception. That is their strength. They do not need quantum computers to hack crypto—they need artificial intelligence to make the attacks invisible," he concluded.

The scale of the threat is staggering. According to a report by the Multilateral Sanctions Monitoring Team (MSMT), a multinational body that tracks compliance with UN sanctions on North Korea, North Korean cybercriminals have stolen $2.84 billion in cryptocurrency since January 2024. A significant chunk of this haul came from the February 2025 attack on the Bybit exchange. And the methods are evolving: in May, a DPRK spy was uncovered among candidates for an engineering role at the Kraken exchange, a clear violation of UN Security Council Resolutions 2375 and 2397, which prohibit the employment of North Korean nationals abroad.

Pyongyang’s reach is global. The MSMT report found that, as of early 2025, between 1,000 and 1,500 North Korean IT workers were based in China, with another 150 to 300 in Russia. There are plans to send more than 40,000 workers to Russia, including several IT delegations, using student visas arranged by Russian educational company ANO ‘HDK Cooperation’ in 2024. The proceeds from these operations, experts believe, fund North Korea’s military programs, buying everything from armored vehicles to missile systems. Cyber-espionage targets a range of critical industries, from semiconductor manufacturing to uranium processing.

Western nations aren’t sitting idle. Andrew Fierman, head of national security intelligence at Chainalysis, told Decrypt that "the capabilities of law enforcement, intelligence and the private sector to identify and neutralize risks have expanded significantly." In August 2025, an unknown user even managed to hack the account of a North Korean IT specialist linked to a $680,000 theft, a rare reversal in this high-stakes digital arms race.

But AI’s threat isn’t confined to the backrooms of cybercriminals or the shadowy world of state espionage. It is also undermining public trust in the images and information we see every day. As reported on October 27, 2025, the rise of AI-generated deepfake satellite imagery has made it easier than ever to forge a kind of picture that, for decades, was considered an ironclad source of truth. Now, all it takes is free software and a few typed prompts to generate hyper-realistic satellite photos that can go viral in minutes.

Recent examples highlight the danger. In June 2025, Ukraine’s Operation Spiderweb saw drones strike Russian long-range bombers. Genuine high-resolution satellite photos of the aftermath spread quickly online, but so did fake images exaggerating the damage. That same month, following U.S. and Israeli strikes on Iranian nuclear-linked facilities, fake images and videos circulated showing destroyed Israeli F-35 jets and alleged Iranian missile responses—scenes that never actually occurred. The four-day India-Pakistan conflict in May saw both sides sharing fake satellite images to claim greater military success.

The impact of such fakes can be immediate and dramatic. In May 2023, an AI-generated image of an explosion near the Pentagon caused a brief dip in the stock market before authorities clarified it was a hoax. With more than half the world’s population using social media, the reach and speed of these manipulated images are unprecedented.

Experts and governments are calling for a society-wide response. Media outlets are urged to verify satellite imagery rigorously and explain their verification processes to readers. Commercial providers are encouraged to offer tools or teams to authenticate images. Some countries are already moving in this direction: Sweden’s brochure "In Case of Crisis or War" and Finland’s guide on influence operations offer citizens advice on spotting disinformation. The U.S. Department of Defense’s Emergency Preparedness Guide mentions media awareness but, critics say, doesn’t go far enough in preparing the public for AI-generated fakes.
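None of the outlets or agencies mentioned publish their verification tooling, but one basic check is straightforward to illustrate: compare a circulating image with a copy obtained directly from the imagery provider. The sketch below does this with perceptual hashing; the distance threshold and file names are assumptions made for the example, and a real newsroom workflow would also examine provenance metadata, capture times, and the provider's own archive.

```python
# Minimal sketch of one verification step for circulating satellite imagery:
# compare a suspect image against a trusted copy obtained directly from the
# provider, using perceptual hashing. The threshold and file names are
# illustrative assumptions, not a newsroom's actual workflow.
import imagehash          # pip install ImageHash
from PIL import Image     # pip install Pillow

HAMMING_THRESHOLD = 10    # assumed cut-off; tune on known-good image pairs


def looks_like_same_scene(suspect_path: str, trusted_path: str) -> bool:
    """Return True if the two images are perceptually close."""
    suspect = imagehash.phash(Image.open(suspect_path))
    trusted = imagehash.phash(Image.open(trusted_path))
    distance = suspect - trusted  # Hamming distance between the 64-bit hashes
    return distance <= HAMMING_THRESHOLD


if __name__ == "__main__":
    if looks_like_same_scene("viral_post.jpg", "provider_archive.jpg"):
        print("Consistent with the provider's archive copy.")
    else:
        print("Diverges from the archive copy; escalate to manual review.")
```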

Underlying these specific threats is a broader concern about the unchecked development and deployment of AI. On the same day, GZERO World with Ian Bremmer aired a segment in which experts warned that companies are racing for market dominance and the elusive prize of artificial general intelligence (AGI), with little regard for the ethical or psychological risks. "AI is the most powerful, inscrutable and uncontrollable technology we've ever invented," said Tristan Harris, a prominent critic of big tech’s approach. He cautioned, "Why are we recklessly racing this out to society psychologically in ways that we definitely don't know what we're doing? This is just stupidity."

Harris and others pointed to risks ranging from psychosis to a loss of critical thinking, arguing that the incentives for rapid user growth and market dominance are overwhelming any sense of responsibility. Ethics, they say, are being tossed aside in the rush to hook as many users as possible.

As AI weaves itself deeper into our digital, political, and social lives, the need for vigilance, transparency, and a renewed focus on ethics has never been greater. The world is learning—sometimes the hard way—that the power of AI is matched only by the scale of its potential risks.