Nation-State Hackers Weaponize Google Gemini: A New Era of AI-Driven Cyber Threats

February 12, 2026 – In a significant revelation that underscores the double-edged nature of artificial intelligence, Google’s Threat Intelligence Group (GTIG) and Google DeepMind have released a comprehensive report detailing how nation-state adversaries are systematically integrating Google Gemini into their cyberattack lifecycles.

The report, released today ahead of the Munich Security Conference, highlights a disturbing trend: Advanced Persistent Threat (APT) groups from China, Iran, and North Korea have moved beyond mere experimentation. These actors are now actively employing Generative AI to accelerate reconnaissance, refine social engineering campaigns, and even dynamically generate malicious code during active operations.

The Operational Shift: From Experimentation to Integration

For the past year, the cybersecurity community has warned of the potential for Large Language Models (LLMs) to lower the barrier to entry for cybercriminals. However, Google’s latest findings confirm that sophisticated state-sponsored groups are leveraging these tools to enhance efficiency and evasion capabilities.

According to the report, the usage of Gemini by these groups is not monolithic. Different actors have adopted the technology to suit their specific strategic goals, ranging from deep-dive open-source intelligence (OSINT) gathering to the real-time translation of phishing lures.

John Hultquist, chief analyst at GTIG, noted that while North Korean and Iranian groups were early adopters of AI for social engineering, Chinese actors are now developing more complex, agentic use cases to streamline vulnerability research and code troubleshooting.

Threat Actor Profile: How Nations Are Exploiting AI

The report provides a granular look at how specific APT groups are utilizing Gemini. The following table summarizes the key actors and their observed methodologies:

Summary of Nation-State AI Exploitation

| Threat Group | Origin | Primary Targets | Key Misuse of Gemini |
| --- | --- | --- | --- |
| APT42 (Charming Kitten) | Iran | Education, Govt, NGOs | Translating phishing lures, refining social engineering personas, and drafting persuasive emails. |
| UNC2970 | North Korea | Defense & Aerospace | Synthesizing OSINT to profile high-value targets; impersonating corporate recruiters. |
| TEMP.Hex (Mustang Panda) | China | Govt & NGOs (Pakistan/Europe) | Compiling structural data on separatist organizations and specific individuals. |
| APT31 (Zirconium) | China | US Industrial/Political Sectors | Using "expert cybersecurity personas" to automate vulnerability analysis and testing plans. |

Iran: Refining the Art of Deception

APT42, a group historically associated with the Iranian Islamic Revolutionary Guard Corps (IRGC), has heavily integrated Gemini into its social engineering operations. Known for targeting researchers, journalists, and activists, APT42 uses the model to translate content and polish the grammar of phishing emails, making them indistinguishable from legitimate correspondence.

By feeding Gemini biographies of targets, the group generates tailored pretexts—scenarios designed to build immediate trust. This capability allows them to bridge language gaps and smooth over cultural missteps that previously served as red flags for potential victims.

North Korea: Industrial-Scale Reconnaissance

For the North Korean group UNC2970, AI serves as a force multiplier for espionage. The group targets the defense and aerospace sectors, often posing as legitimate recruiters to deliver malware.

Google’s analysis reveals that UNC2970 uses Gemini to scrape and synthesize vast amounts of data from professional networking sites (such as LinkedIn). The AI helps them map out organizational hierarchies, identify key technical personnel, and draft hyper-realistic job descriptions used in spear-phishing campaigns.

China: Automated Vulnerability Research

Chinese state-sponsored actors, including TEMP.Hex and APT31, have demonstrated some of the most technical applications of the technology. These groups have been observed using Gemini to troubleshoot their own malware code and research publicly known vulnerabilities.

In one alarming instance, a Chinese group utilized Gemini to simulate "expert cybersecurity personas." These AI agents were tasked with automating the analysis of software vulnerabilities and generating testing plans to bypass security controls on US-based targets. This suggests a move toward automated offensive operations, where AI agents assist in the planning phase of an intrusion.

The Rise of AI-Native Malware: "Honestcue"

Perhaps the most technical revelation in the report is the discovery of Honestcue, a malware strain identified in September 2025. Unlike traditional malware, which embeds its malicious payload in the binary itself, Honestcue functions as a hollow shell that relies on the cloud.

Honestcue leverages the Google Gemini API to dynamically generate and execute malicious C# code in memory. By offloading the malicious logic to an AI response, the attackers achieve two goals:

  1. Obfuscation: Traditional antivirus tools that rely on static file analysis struggle to detect the threat because the malicious code does not exist until the AI generates it.
  2. Polymorphism: The code generated by the AI can vary slightly with each execution, complicating signature-based detection.

This "living off the land" approach—where the "land" is now a cloud-based AI service—represents a significant evolution in malware development.

The "Jailbreak" Ecosystem and Model Theft

Beyond nation-state espionage, the report sheds light on the growing underground economy of "Jailbreak-as-a-Service." Cybercriminals are marketing tools that claim to be custom, uncensored AI models but are often merely wrappers around commercial APIs like Gemini or OpenAI.

One such tool, Xanthorox, advertises itself as a private, self-hosted AI for generating ransomware and malware. Google’s investigation, however, revealed that Xanthorox simply routes prompts through jailbroken instances of legitimate models, stripping away safety filters to deliver malicious content.

Furthermore, financially motivated groups are increasingly conducting Model Extraction Attacks (MEAs). In these "distillation attacks," an adversary systematically queries a mature model like Gemini and harvests its responses to train cheaper, smaller clone models, effectively stealing the intellectual property embedded in the original. While this does not compromise user data, it poses a severe threat to the competitive advantage of AI developers.

Google’s Defense and the Path Forward

In response to these findings, Google has taken aggressive action, disabling all identified accounts and assets associated with the APT groups mentioned in the report. The company emphasized that while adversaries are using Gemini for content generation and coding assistance, there is no evidence that the security of the Gemini model itself has been compromised.

"For government-backed threat actors, LLMs have become essential tools for technical research, targeting, and the rapid generation of nuanced phishing lures," the report states.

Creati.ai notes that this development signals a permanent shift in the threat landscape. As AI models become more multimodal and agentic, the window between a vulnerability being discovered and exploited will continue to shrink. The integration of AI into offensive cyber operations is no longer a theoretical risk—it is the new standard of engagement.

For enterprise security teams, this necessitates a pivot toward behavior-based detection systems capable of identifying AI-generated anomalies, rather than relying solely on static indicators of compromise. As the arms race between AI-enabled attackers and AI-driven defenders accelerates, the integrity of the AI supply chain itself will likely become the next major battleground.
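As a rough illustration of what such behavior-based detection could look like, the sketch below correlates two kinds of endpoint telemetry: an outbound connection to a generative-AI API host, followed by an in-memory code compilation from the same process. The TelemetryEvent schema, the host list, and the suspicious_pids helper are illustrative assumptions for this article, not part of Google's report or any particular EDR product.

```python
# Hedged sketch of a behavior-based detection rule: flag a process that both
# contacts a generative-AI API endpoint and later compiles/executes code in memory.
# The telemetry schema below is an illustrative assumption, not a real EDR API.
from dataclasses import dataclass

@dataclass
class TelemetryEvent:
    pid: int        # process ID emitting the event
    kind: str       # e.g. "network_connect" or "in_memory_compile"
    detail: str     # destination host, or the compiler invoked

# Illustrative watchlist of generative-AI API hosts.
AI_API_HOSTS = {"generativelanguage.googleapis.com", "api.openai.com"}

def suspicious_pids(events: list[TelemetryEvent]) -> set[int]:
    """Return PIDs that contacted an AI API and then compiled code in memory."""
    contacted_ai: set[int] = set()
    flagged: set[int] = set()
    for ev in events:  # events assumed to arrive in chronological order
        if ev.kind == "network_connect" and ev.detail in AI_API_HOSTS:
            contacted_ai.add(ev.pid)
        elif ev.kind == "in_memory_compile" and ev.pid in contacted_ai:
            flagged.add(ev.pid)
    return flagged

# Example: process 4242 fetches content from an AI API, then compiles code in memory.
events = [
    TelemetryEvent(4242, "network_connect", "generativelanguage.googleapis.com"),
    TelemetryEvent(4242, "in_memory_compile", "in-process C# compiler"),
]
print(suspicious_pids(events))  # {4242}
```

The value of the sketch is the correlation logic rather than the specific fields: a single AI API call or a single in-memory compile is usually benign, but the combination within one process is a far stronger anomaly signal.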