Technology
01 February 2025

State-Sponsored Hackers Exploit Google’s Gemini Chatbot

Iranian, Chinese, and North Korean hackers are using the AI tool to boost their cyber operations, though without gaining significant new capabilities.

Hackers from Iran, China, and North Korea are leveraging Google's Gemini chatbot to boost their cyber operations, according to a report from Google's Threat Intelligence Group (GTIG) published on January 31, 2025. The report details how state-sponsored actors are using the AI tool to increase their productivity, though it has not yet enabled significant advances in their capabilities.

According to the GTIG report, government-backed attackers are employing Gemini for various tasks, including coding, scripting, and gathering intelligence on potential targets. “Government-backed attackers attempted to use Gemini for coding and scripting tasks, gathering information about potential targets, researching publicly known vulnerabilities, and enabling post-compromise activities,” the report stated.

Iranian hackers emerged as the most frequent users of the chatbot, relying on it primarily for phishing campaigns and reconnaissance against defense experts and organizations. “Iranian hackers were the biggest users of Gemini, employing it to craft phishing campaigns or conduct reconnaissance on defense experts and organizations,” the report noted.

Chinese hackers, by contrast, are using Gemini chiefly for debugging code and gaining deeper access to their targets' networks, engaging the chatbot on tasks such as moving laterally within systems, escalating privileges, and exfiltrating data. “They focused on topics such as lateral movement, privilege escalation, data exfiltration, and detection evasion,” the report noted.

North Korean actors, meanwhile, have been observed using Gemini to create fake cover letters and research remote IT job opportunities within Western firms, potentially as part of broader infiltration strategies. The report indicates, “They also used Gemini to research topics of strategic interest to the North Korean government, such as the South Korean military and cryptocurrency.” This sophisticated tactic is consistent with previous findings, where U.S. officials pointed out North Korea's strategy of placing individuals within remote roles at U.S. firms using false identities.

Russian hackers have also been using Gemini, though to a more limited extent, mainly for coding tasks such as translating malware and adding encryption features. According to the GTIG report, this group did not significantly expand their techniques during the period analyzed; Google says it saw no indications of them developing novel capabilities.

Overall, the report offers valuable insight into how generative AI like Gemini is not fundamentally changing these hackers' capabilities but is instead allowing them to operate more efficiently. “Rather than enabling disruptive change, generative AI allows threat actors to move faster and at higher volumes,” the GTIG noted, underscoring how modern AI tools can aid innovation and exploitation alike.

The impact of AI tools on cybercrime is becoming increasingly relevant to cybersecurity experts. Analysts have long believed AI could significantly increase the volume and effectiveness of cyberattacks. A recent statement from the UK's National Cyber Security Centre supports this claim: "AI would increase the volume and heighten the impact of cyber attacks, but the overall impact would be uneven.”

While these hackers are finding productivity gains, Google emphasized Gemini's limitations. Safeguards built into the AI prevent its use for more sophisticated and damaging attacks, such as attempts to access or manipulate Google's products directly. “Current LLMs on their own are unlikely to enable breakthrough capabilities for threat actors,” the report stated, while acknowledging the rapidly changing nature of AI technology.

The emergence of tools like Gemini has also sparked discussion about how the use of AI in cybersecurity contexts might be monitored and regulated. The interplay between innovation and risk is now more evident than ever, challenging organizations and governments to defend against increasingly sophisticated threat actors.

With hackers becoming more productive through tools like Gemini, the cybersecurity community must remain vigilant. The trends and behaviors outlined by Google's GTIG will undoubtedly inform future protective strategies and responses to state-sponsored cyber threats.

The Gemini chatbot's role is still developing, leaving many questions about how hackers might use it effectively moving forward and what countermeasures will be necessary to combat such exploitation.