According to a recent report by Google, the company observed more than 50 threat actors tied to China, Iran, North Korea, and Russia using artificial intelligence (AI) technology powered by Google's Gemini to facilitate their nefarious cyber and information-enabled operations. These actors leveraged the technology to support different phases of the cyber attack cycle, with activities ranging from malicious coding and payload development to information collection against targets, vulnerability research, and helping threat actors evade detection after compromising a victim. Despite the worrisome aspects of these revelations, Google noted that many of these activities were still experimental, with the actors not yet developing new capabilities. It appears that threat actors are still figuring out how to maximize generative AI to their benefit, as most of the incidents related in the report revealed generative AI facilitating faster, more efficient operations rather than enabling pure disruption. While this may offer temporary relief, it will undoubtedly change soon.
Per its report, Google observed Chinese threat actors using Gemini for target research and reconnaissance; vulnerability research; scripting and development; and translation and explanation. Particularly significant was that Chinese threat actors used Gemini to "work through scripting and development tasks," with the intent of gaining more robust access into a victim network. Given China's longstanding history of cyber espionage, and now its interest in gaining and sustaining access to critical infrastructure networks, it appears that these actors are turning to generative AI for solutions to challenges they may have encountered in the past. While some attempts were unsuccessful (e.g., reverse engineering the endpoint security solution of a well-known vendor), they do reveal that the actors are looking to exploit generative AI capabilities in a variety of ways, new territory that is being eagerly explored.
The fact that state actors are looking to take advantage of generative AI is neither surprising nor unexpected. But China's involvement is interesting to note given that China has delivered its own brand of generative AI known as DeepSeek. DeepSeek has quickly emerged as a top competitor to established offerings from U.S. firms like OpenAI (ChatGPT) and Google (Gemini). Within a week of its launch, DeepSeek became the most downloaded free application in the United States as well as the world, a testament to the appetite for generative AI even with so many free alternatives available. One reason behind this may be the company's claim that DeepSeek's R1 model was developed at a fraction of the cost of competing brands, with comparable results, and despite the United States curbing chip exports to China. A recent comparison of DeepSeek, ChatGPT, and Gemini found that DeepSeek outperformed the others on standard tests used to evaluate AI platforms, aided largely by its ability to implement "chain-of-thought" reasoning, which helped it break down and manage multifaceted undertakings. While there are nuanced differences between them, one thing is clear: DeepSeek has made an immediate impact on the international market.
One question stands out: if DeepSeek is as capable as some reviews suggest, why do Chinese threat actors need to use alternatives? In at least one incident that Google tracked, Chinese actors tried to get Gemini to reveal details like its "IP address, kernel version, and network configuration." Certainly, such questions are disconcerting, especially given China's alleged activities of compromising technology to facilitate cyber espionage. Granted, such information could be an attempt to help Chinese companies improve their own generative AI products, but it could also enable a threat actor in potential future exploitation attempts against the technology, further reinforcing the benefits states see in harnessing AI capabilities.
And that has raised alarms. Recently, U.S. President Trump called DeepSeek's immediate success and its subsequent impact on U.S. tech-sector stock losses a "wake up" call, and several countries have already banned its use for various security reasons. Interestingly, in the midst of its rollout at the end of January, DeepSeek experienced a couple of cyber attacks against its infrastructure, according to China. One attack was a distributed denial of service intended to disrupt DeepSeek's services, while subsequent attacks were brute force in nature, trying to crack user IDs in an effort to perhaps understand how DeepSeek works. China blamed U.S. hackers for the attacks, though it stopped short of pointing to government culpability. The fact that the threat actors purportedly sought to discover the workings of the DeepSeek platform is interesting in its own right and echoes the type of exploitation Chinese threat actors sought against Gemini.
The nation-state rush to use AI has been expected, though addressing the challenges of adversary use of this advanced technology looks to rely largely on the very same technology. According to a recent report, 41% of global technology and data leaders expect the volume of cyber threats to increase due to AI's rapid adoption, and as of the end of 2024, an estimated 50% of businesses had implemented some measure of AI into their cybersecurity processes. It remains to be seen whether attackers or defenders will have the advantage in this space, especially as attackers become fully adept at maximizing generative AI not just to enhance current operations but to create never-before-seen attacks.
Fortunately, the global community has taken notice. In September 2024, the United States, Europe, the United Kingdom, and several other countries signed a treaty to ensure that AI is "developed and decommissioned in ways that respect human rights, support democratic institutions, and uphold the rule of law." However, the treaty's tenets apply to all AI systems save those used for national security or defense, an interesting exemption given the concern over how states can use AI in their own self-interest. It's difficult enough for network defenders to adopt AI defenses at the same pace attackers – whether cybercriminals, hacktivists, or just run-of-the-mill hackers – are deploying them. Now, with state actors aggressively jumping into the fray, things become exponentially more taxing, especially if states are creating their own AI that they can weaponize. So, while the dual-use nature of AI can support both sides of a state's cyber program, there is always the fear that a state can better leverage AI for its own attack interests, a concern that may keep others from signing. Currently only 10 countries have signed the treaty (with no ratifications as of yet), not including the European Union, so many may wait and see how this unfolds before committing.
A common thread for bolstering cybersecurity globally is international cooperation: sharing threat information and best practices, adopting common standards, and implementing adaptable regulations to address the evolving cyber threat environment. This strategy has been an ongoing process with varying degrees of success. Whether it can be applied to the AI space is a question mark. Cyberspace already moves fast, but the speed with which AI implementation is impacting the cyber landscape will challenge multinational efforts to find a way forward in a timely manner. Worse, the debate over how to address the AI space is still relatively fresh and is contested by public and private interests alike, each of which thinks it knows better. One thing seems certain: in the changing dynamics of great power competition, states see competing for dominance in this field as a strategic imperative that will position them for future success. The race may be on, but it's anyone's game to win.