The advent of artificial intelligence has understandably raised concerns that hostile threat actors will abuse the technology to increase the speed of their attacks and the efficacy of their operations. AI tools can tailor personalized content and generate templates that deceive targets with their professional looks and polished delivery. Indeed, AI is being used by cybercriminals and state actors alike, even as defenders race to harness it to identify and mitigate AI-enabled, as well as other, cyber threats. This is the new reality, and one progressing at such an alarming rate that it has caused the private sector much consternation over whether it is prepared to address the threats AI poses to organizations. Case in point: a recent report revealed that more than 20% of U.S. businesses were ill-prepared for AI risks.
However, while the private sector struggles to address these challenges, governments are scrambling to figure out how best to incorporate AI-enabled cyber attacks into their arsenals and integrate them with more conventional warfighting tactics. According to a Google report, threat actors from at least 50 states were using AI for “research, troubleshooting code, and creating and localizing content,” though the report acknowledged that some were trying to exploit generative AI for more nefarious purposes, such as conducting influence operations. One capability of growing concern is the weaponization of video, image, and voice deepfakes, which become more convincing as AI is developed and refined.
A CrowdStrike report revealed a significant (442%) surge in AI-enabled impersonation attacks (e.g., phishing, vishing, smishing) in the first half of 2024, suggesting that threat actors are seeking new attack vectors and channels to achieve their objectives. These tactics are deeply troubling because they target the very devices most people rely on for both personal and professional use, increasingly on a single device. Per two sources, between 60% and 80% of individuals use one smartphone for both business and pleasure, making those devices high-value targets for threat actor operations.
Recently, a mishap at the national level showed how dangerous the compromise of a senior government official’s device can be: an individual who was not supposed to be on a Signal chat was mistakenly added to one in which sensitive discussions about air strikes were taking place. And while that incident may well have been a simple mistake, the fact that senior officials were using smartphones and a specific app to communicate directly with one another during a classified operation bears noting. One notable aspect that emerged from the incident is that while Signal encrypts messages in transit, a message must be decrypted on the device to be read, so an attacker who has already compromised the phone can read it. But perhaps more disconcerting is the fact that state actors have already been observed targeting specific persons via Signal to gain access to their accounts, which could then be used to collect information or engage in more nefarious activities.
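To make that endpoint-compromise point concrete, the minimal sketch below illustrates why end-to-end encryption protects a message on the wire but not on a compromised device. It is illustrative only and is not Signal’s actual protocol (which adds ratcheting and forward secrecy); it uses the PyNaCl library and an invented message purely as an assumption for demonstration.

```python
# Illustrative sketch only -- NOT Signal's protocol. Uses PyNaCl
# (pip install pynacl) to show that end-to-end encryption hides a
# message in transit, yet the plaintext must exist on the recipient's
# device after decryption, where resident malware can read it.
from nacl.public import PrivateKey, Box

# Each party holds a key pair (real messaging protocols rotate keys;
# omitted here for brevity).
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts to Bob: anyone observing the wire or the server
# sees only opaque ciphertext.
sending_box = Box(alice_sk, bob_sk.public_key)
ciphertext = sending_box.encrypt(b"Meeting moved to 0400.")  # hypothetical message
assert b"Meeting moved" not in bytes(ciphertext)  # unreadable in transit

# Bob's device must decrypt the message to display it -- so any code
# running with the app's privileges on a compromised phone sees it too.
receiving_box = Box(bob_sk, alice_sk.public_key)
plaintext = receiving_box.decrypt(ciphertext)
print(plaintext.decode())  # plaintext in device memory, post-decryption
```

The design point is that transport encryption and endpoint security are separate guarantees: once the endpoint falls, the encryption is effectively bypassed.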
In this type of scenario, threat actors could use the compromised accounts to execute more subtle attacks using deepfake audio or video communications. Because an application like Signal offers state-of-the-art encryption, it is presumed secure, and by extension, any information traversing its protected channels is presumed authentic. A compromised phone would give attackers access to Signal (or any other next-generation secure communication application), through which a crafty attacker could distribute false audio or video content to achieve a specific objective, such as disseminating disinformation or misinformation, issuing false directives, or executing any other information-borne attack.
Given how heavily senior government officials rely on an application like Signal for secure communications, it is easy to see how an adversarial state actor could turn such access to its benefit. With the volume of video and audio content of these individuals that could be harvested from public sources, replicating their images and voices with advanced deepfake technology would be fairly straightforward. Recently, a criminal using deepfake technology to impersonate Secretary of State Marco Rubio attempted to reach out to at least three foreign ministers and a governor, an incident deemed neither a prank nor a stunt, suggesting more nefarious purposes behind the attack. To call this a wakeup call is an understatement.
It is not too much of a leap to believe that adversaries would look to combine these two types of attacks – compromising “secure” communication channels and deploying sophisticated deepfakes – to corrupt a high-ranking government or military decision maker’s OODA Loop, impairing their ability to determine and execute an appropriate course of action for a given situation. One need only look at some of the more sophisticated and stealthy cyber attacks to know that imagination is the only limitation. Stuxnet, SolarWinds, and the intrusion that prompted Operation Buckshot Yankee (the U.S. response that led to the creation of U.S. Cyber Command) were nothing short of creative, complicated, and, most importantly, successful. Is the use of deepfakes, which are becoming increasingly difficult to detect, in such a manner that far-fetched?
Deepfakes are highly realistic synthetic media that have already been weaponized to spread disinformation, incite unrest, and compromise critical communications, potentially leading to widespread societal disruption. And while the United States has an AI strategy in place, it would behoove it to advance its AI defense preparedness throughout the civilian and military chains of command and to anticipate not only the unexpected but the improbable as well.