Use of Gemini by government groups: what analysts have observed
23 February 15:39
Google has reported an attempt to clone its Gemini language model using a so-called distillation attack, a method that allows the model’s logic to be reproduced by sending a large number of requests through its API.
The findings appear in the quarterly report of the Google Threat Intelligence Group, Komersant Ukrainian reports.
How a distillation attack works
According to the company, unknown individuals sent more than 100,000 requests to Gemini. The goal was not to break the infrastructure, but to reproduce the model’s behavior — its language patterns, response logic, and ability to work in different languages.
This approach is known in the industry as “model extraction” or “distillation” — essentially copying the model’s knowledge by analyzing its responses.
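To make the mechanics concrete, the following is a minimal, illustrative sketch of the idea in principle. It is not code from the reported incident: the teacher_respond function and the prompt list are hypothetical stand-ins, and a real extraction attempt would send its prompts to the target model's API and later fine-tune a separate "student" model on the collected prompt/response pairs.

```python
# Illustrative sketch only: teacher_respond() is a placeholder for a call to
# the target model's API; in a real extraction attempt the collected pairs
# would later be used to fine-tune a separate "student" model.
import json

def teacher_respond(prompt: str) -> str:
    """Hypothetical stand-in for querying the target model."""
    return f"[teacher's answer to: {prompt}]"

def collect_distillation_data(prompts, out_path="distill_data.jsonl"):
    """Save prompt/response pairs: the raw material for training a copy."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            record = {"prompt": prompt, "response": teacher_respond(prompt)}
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    # The reported incident involved over 100,000 requests; ten are enough
    # to show the pattern of high-volume, automated querying.
    sample_prompts = [f"Explain topic #{i} in two languages." for i in range(10)]
    collect_distillation_data(sample_prompts)
```

It is precisely this kind of high-volume, scripted querying that providers look for when they talk about abnormal API access patterns.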
Google emphasized that it considers this a form of intellectual property theft and a violation of its terms of service.
A new cyber threat to AI
The problem goes beyond one company. OpenAI previously told US lawmakers that the Chinese company DeepSeek allegedly used covert methods to obtain the outputs of leading American AI models in order to train its own systems.
Industry experts acknowledge that model theft is becoming a new type of cyber risk. Unlike traditional hacking, attackers are not trying to destroy the system, but to “extract” its knowledge — the results of years of expensive research.
Use of Gemini by state-sponsored groups
The report also notes that at the end of 2025, Gemini was used by state-sponsored groups linked to China, Iran, North Korea, and Russia.
Among them:
- Iranian APT42 — for preparing social engineering campaigns;
- Chinese APT31 and UNC795 — for analyzing vulnerabilities and configuring malicious code;
- North Korean UNC2970 — for gathering intelligence on defense and cybersecurity companies.
Google said that the relevant accounts had been blocked and the information collected had been used to strengthen security.
AI inside malicious software
Separately, the company reported discovering new malware, HONESTCUE, which calls the Gemini API directly from its own code to generate and execute malicious commands.
Researchers also identified the COINBAIT phishing kit and the Xanthorox service, which positioned itself as a standalone AI platform but in fact relied on commercial models, including Gemini.
What this means for the market
The incident highlights a new dilemma for companies providing AI as a service: API openness accelerates innovation, but at the same time creates the risk of model copying.
Google emphasizes that providers need to:
- closely monitor API access patterns;
- limit mass automated requests, as sketched below;
- implement control over model responses;
- adapt protection to the rapid evolution of threats.
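As an illustration of the first two points, here is a minimal sketch of a per-key request ceiling over a sliding window. The window size, threshold, key name, and in-memory storage are assumptions made for the example; a production API gateway would rely on persistent counters and richer behavioural signals.

```python
# Minimal sketch of per-key rate limiting to curb mass automated querying.
# Thresholds and storage are illustrative assumptions, not a product design.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600            # look-back window (assumed: one hour)
MAX_REQUESTS_PER_WINDOW = 1000   # hypothetical per-key ceiling

_request_log: dict[str, deque] = defaultdict(deque)

def allow_request(api_key: str, now: float | None = None) -> bool:
    """Return False once a key exceeds the ceiling within the sliding window."""
    now = time.time() if now is None else now
    log = _request_log[api_key]

    # Drop timestamps that have fallen out of the window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()

    if len(log) >= MAX_REQUESTS_PER_WINDOW:
        # Sustained mass querying: refuse (or flag for review) instead of serving.
        return False

    log.append(now)
    return True

if __name__ == "__main__":
    # Simulate one key issuing 1,500 requests in quick succession.
    blocked = sum(not allow_request("key-123", now=i * 0.1) for i in range(1500))
    print(f"requests blocked: {blocked}")  # 500 of 1,500 exceed the ceiling
```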
The artificial intelligence market is increasingly facing not only ethical and regulatory issues, but also a direct struggle for technological superiority.