Artificial Intelligence (AI) dominated the tech landscape over the past year, integrating into business as companies sought to improve their operations, work more efficiently, and automate wherever possible. This trend should continue for the foreseeable future, with many cybersecurity and technology resources predicting that AI is and will remain a game changer both for legitimate enterprises and for hostile actors seeking to leverage it for nefarious purposes. The race has already started between network defenders and their criminal counterparts over who can operationalize AI better and thereby gain an advantage over the other. With cybercrime becoming the world’s third largest economy in 2024, it is easy to see how threat actors can amplify their campaigns by harnessing AI’s power. Some experts expect AI to fuel cybercrime in 2025, driving attack types like phishing and producing credible-looking social engineering content that can fool even the most alert and diligent target. Given that AI-generated content facilitated more than USD $12 billion in losses in 2023, a figure anticipated to triple over two years, these concerns are well founded.
One area where AI is expected to make a significant impact over the next few years is the deepfake manipulation of images, videos, and audio. While deepfakes first started off as a means of entertainment, the technology has grown alarmingly more polished, making it exceptionally difficult to distinguish legitimate from fabricated content. As is the case with much of AI, the technology is proving to be a double-edged sword that can be leveraged to detect fakes as well as to create them. Detection and authentication technologies are helping to identify fake media without having to assess its legitimacy against the original, as well as embedding digital watermarks or metadata to prove media authenticity. In an era of “fake news” and “disinformation/misinformation,” deepfakes are fast becoming less of a novelty and more of a legitimate threat tactic whose success has far-reaching implications for the integrity of how content is created, published, and disseminated to audiences large and small.
What’s particularly worrisome about deepfakes is that many people assert an ability to recognize deepfake videos, audio, and images, which may be an overestimation given the technology’s increasing sophistication. A 2023 study across four countries found that even though 72% of consumers expressed concern about being fooled by deepfakes, more than half believed they would be able to identify deepfake content. Given the increased production of deepfakes, such confidence is disconcerting, especially since advanced state cyber actors like China, Iran, and Russia have been tied to some of the more legitimate-looking deepfake content disseminated during U.S. election cycles. Though there is some debate about the extent to which deepfakes influenced voters, the fact that more than half of Americans get at least some of their news from social media channels makes deepfake production an inexpensive endeavor with potentially high-value rewards if executed properly.
However, there are some skeptics of the deepfake effect. Of note, the World Economic Forum conducted its own research on how deepfakes were deployed during the 2024 election and found that AI was not necessarily required to produce deceptive narratives. In fact, per its findings, half of the deepfakes it investigated were not deceptive at all, and political misinformation produced without AI proved at least as effective. While this is true, it is the steady progression of deepfake content creation that makes it a particularly worrisome threat. The technology and its deployment are still at a rather nascent stage, and adversaries are still figuring out how best to leverage the tool for maximum benefit. One need only look at the history of phishing to see how quickly it evolved from “spray and pray” campaigns to the more selective targeting seen in spearphishing and whaling attacks. While less grand in scale, such targeted operations carry tremendous upside if conducted successfully.
Therefore, it would be irresponsible to dismiss political misinformation deepfakes simply because disinformation can be manufactured without AI technology. Geopolitics is a huge driver of some of the more nefarious and sophisticated cyber-enabled activity, whether used in tandem with kinetic conflicts like the one occurring in Ukraine, or more surreptitiously, like the various TYPHOON activities that have sought to compromise key U.S. critical infrastructure networks. In the face of so many threats born of geopolitical tensions, it is no surprise that nearly 60% of organizations have adjusted their cybersecurity strategies to meet these complex challenges, according to a recent World Economic Forum report on cybersecurity in 2025.
The abundance of news and literature on deepfakes has raised the public’s awareness of their use, which ameliorates a bad situation, especially in charged environments like elections. However, like its spearphishing counterpart, the future of deepfake execution may lie in sparing use rather than producing content for mass consumption, making it a targeted, specialized weapon. This seems like a natural evolution for the deepfake threat, especially for those governments well versed in conducting soft power, information-enabled attacks and campaigns. For example, an adversary may share a polished deepfake that incriminates a high-ranking official of one country in actions against the government with which it shares the “find,” in order to curry favor and achieve some political or economic objective. It’s not meant to go viral, only to influence the intended target. In another case, a government could use deepfake content as “evidence” of a transgression to justify a political, economic, or military response. Deepfakes can also be used by governments competing for favor and influence in developing countries, painting their competitors in a poor light to gain an advantage.
Deepfakes are a form of information manipulation that can serve a variety of purposes depending on the intent of the actor behind them, and it is clear that they can be deployed in more ways than what has been observed to date, making the old adage “seeing is believing” nearly obsolete. And while their risks have risen to the national level, their use against commercial interests and the private sector is worthy of close consideration, particularly as the technology becomes more sophisticated and refined. Embarrassing or extorting an organization’s officers and stealing money are two ways deepfakes have already been successfully used against commercial targets. Moving into 2025, it remains to be seen whether detection technologies can be adopted worldwide before hostile actors gain the upper hand. One thing is evident: the race has already started, and no one wants to be left at the starting line.