“In 2024, one billion people around the world will go to the polls for national elections. From the US presidential election in 2024 to the war in Ukraine, we’re entering the era of deepfake geopolitics, where experts are concerned about the impact on elections and public perception of the truth.
Project Liberty, in a 2023 e-mail newsletter, explored “deepfakes and state-sponsored disinformation campaigns, what it means for the future of geopolitical conflict, and what we can do about it.”
Coming to a screen near you
Deception for geopolitical gain has been around since the Trojan horse. Deepfakes, however, are a particular form of disinformation that has emerged recently due to advances in technology that generate believable audio, video, and text intended to deceive.
Generative AI tools like Midjourney and OpenAI’s ChatGPT are used by hundreds of millions of people each month to generate new content (ChatGPT is the fastest-growing consumer application in history), but they are also the tools used to create deepfakes.
Henry Ajder, an independent AI expert, told WIRED: ‘To create a really high-quality deepfake still requires a fair degree of expertise, as well as post-production expertise to touch up the output the AI generates. Video is really the next frontier in generative AI.’” (1)
Fake vids, real geopolitics
Even if deepfake videos aren’t perfect, they’re already being used to shape geopolitics. The war in Ukraine could have gone very differently had Ukrainian soldiers believed the March 2022 deepfake video of President Zelenskyy calling on them to lay down their arms.
The video was quickly diagnosed as a deepfake and taken down from social media: Zelenskyy’s accent was off, and both the audio and video had signs of doctoring.
It may only be a matter of time before deepfakes are used to escalate conflict between China and Taiwan (Taiwan receives more fake news online than any other country in the world, according to the Digital Society Project). In February, The New York Times reported on the first known instance of a state-aligned disinformation campaign using deepfake video: the Chinese government deployed entirely fabricated broadcaster personas, with both voice and image 100% computer-generated, to advance pro-China views.
CEO Sundar Pichai says Google is intentionally limiting Bard AI’s public capabilities
Google CEO Sundar Pichai says the company is intentionally limiting Bard AI’s capabilities, warning that it will soon be easy to use AI to create deceptive “deepfake” videos of public figures.
Details:
AI misinformation and scams are already very real and will only worsen as AI continues to advance. (2a)
“Deep fakes”—a term that first emerged in 2017 to describe realistic photo, audio, video, and other forgeries generated with artificial intelligence (AI) technologies—could present a variety of national security challenges in the years to come. As these technologies continue to mature, they could hold significant implications for congressional oversight, U.S. defense authorizations and appropriations, and the regulation of social media platforms.
Though definitions vary, deep fakes are most commonly described as forgeries created using techniques in machine learning (ML)—a subfield of AI—especially generative adversarial networks (GANs). In the GAN process, two ML systems called neural networks are trained in competition with each other. The first network, or the generator, is tasked with creating counterfeit data—such as photos, audio recordings, or video footage—that replicate the properties of the original data set. The second network, or the discriminator, is tasked with identifying the counterfeit data. Based on the results of each iteration, the generator network adjusts to create increasingly realistic data. The networks continue to compete—often for thousands or millions of iterations—until the generator improves its performance such that the discriminator can no longer distinguish between real and counterfeit data.
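For readers who want that generator-versus-discriminator loop made concrete, below is a minimal, illustrative Python/PyTorch sketch trained on a toy one-dimensional data set rather than images; every layer size, learning rate, and iteration count is an assumption chosen for brevity, not a description of any real deepfake system.

```python
# Minimal sketch of the GAN training loop described above (illustrative only).
# The "real" data here is a toy 1-D Gaussian, not photos or video.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to counterfeit samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Discriminator: scores samples as real (1) or counterfeit (0).
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(5000):  # real systems run thousands to millions of iterations
    real = torch.randn(64, 1) * 2 + 5          # samples from the "original data set"
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator to tell real from counterfeit.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

As the loop runs, the generator's samples drift toward the statistics of the real data until the discriminator can no longer reliably separate the two, which is the dynamic the CRS description above captures.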
Though media manipulation is not a new phenomenon, the use of AI to generate deep fakes is causing concern because the results are increasingly realistic, rapidly created, and cheaply made with freely available software and the ability to rent processing power through cloud computing. Thus, even unskilled operators could download the requisite software tools and, using publicly available data, create increasingly convincing counterfeit content.
Deep fake technology has been popularized for entertainment purposes—for example, social media users inserting the actor Nicolas Cage into movies in which he did not originally appear and a museum generating an interactive exhibit with artist Salvador Dalí. Deep fake technologies have also been used for beneficial purposes. For example, medical researchers have reported using GANs to synthesize fake medical images to train disease detection algorithms for rare diseases and to minimize patient privacy concerns.
Deep fakes could, however, be used for nefarious purposes. State adversaries or politically motivated individuals could release falsified videos of elected officials or other public figures making incendiary comments or behaving inappropriately. Doing so could, in turn, erode public trust, negatively affect public discourse, or even sway an election. Indeed, the U.S. intelligence community concluded that Russia engaged in extensive influence operations during the 2016 presidential election to “undermine public faith in the U.S. democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency.” Likewise, in March 2022, Ukrainian President Volodymyr Zelensky announced that a video posted to social media—in which he appeared to direct Ukrainian soldiers to surrender to Russian forces—was a deep fake. While experts noted that this deep fake was not particularly sophisticated, in the future, convincing audio or video forgeries could potentially strengthen malicious influence operations.
Today, deep fakes can often be detected without specialized detection tools. However, the sophistication of the technology is rapidly progressing to a point at which unaided human detection will be very difficult or impossible. While commercial industry has been investing in automated deep fake detection tools, this section describes U.S. government investments and activities.
The Identifying Outputs of Generative Adversarial Networks Act (P.L. 116-258) directed NSF and NIST to support research on GANs. Specifically, NSF is directed to support research on manipulated or synthesized content and information authenticity, and NIST is directed to support research for the development of measurements and standards necessary to develop tools to examine the function and outputs of GANs or other technologies that synthesize or manipulate content.
In addition, DARPA has had two programs devoted to the detection of deep fakes: Media Forensics (MediFor) and Semantic Forensics (SemaFor). MediFor, which concluded in FY2021, was to develop algorithms to automatically assess the integrity of photos and videos and to provide analysts with information about how counterfeit content was generated. The program reportedly explored techniques for identifying the audio-visual inconsistencies present in deep fakes, including inconsistencies in pixels (digital integrity), inconsistencies with the laws of physics (physical integrity), and inconsistencies with other information sources (semantic integrity). MediFor technologies are expected to transition to operational commands and the intelligence community.
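To make the “digital integrity” idea above concrete, here is a toy Python sketch (not a DARPA or MediFor algorithm) that applies simple error-level analysis: it recompresses an image and flags blocks whose recompression error stands out from the rest of the picture, a classic hint of local editing. The file name, block size, and threshold are hypothetical.

```python
# Toy illustration of a pixel-level ("digital integrity") check.
# Simple error-level analysis: locally edited regions often recompress
# differently from the rest of a JPEG image.
import io
import numpy as np
from PIL import Image

def error_level_map(path: str, quality: int = 90) -> np.ndarray:
    original = Image.open(path).convert("RGB")

    # Recompress the image and measure per-pixel differences.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf).convert("RGB")

    diff = np.abs(np.asarray(original, dtype=np.int16) -
                  np.asarray(recompressed, dtype=np.int16))
    return diff.mean(axis=2)  # one error value per pixel

def suspicious_regions(path: str, block: int = 32, z_thresh: float = 3.0):
    """Flag blocks whose mean error is an outlier relative to the whole image."""
    ela = error_level_map(path)
    h, w = ela.shape
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            scores.append(((y, x), ela[y:y + block, x:x + block].mean()))
    values = np.array([s for _, s in scores])
    mu, sigma = values.mean(), values.std() + 1e-9
    return [(pos, val) for (pos, val) in scores if (val - mu) / sigma > z_thresh]

# Example (hypothetical file name):
# print(suspicious_regions("press_photo.jpg"))
```

Real forensic pipelines combine many such signals, which is why the DARPA programs described above pair pixel-level checks with physical and semantic ones.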
Figure 1. Example of Semantic Inconsistency in a GAN-Generated Image
Source: Uncovering the Who, Why, and How Behind Manipulated Media (darpa.mil)
SemaFor seeks to build upon MediFor technologies and to develop algorithms that will automatically detect, attribute, and characterize (i.e., identify as either benign or malicious) various types of deep fakes. This program is to catalog semantic inconsistencies—such as the mismatched earrings seen in the GAN-generated image in Figure 1, or unusual facial features or backgrounds—and prioritize suspected deep fakes for human review. DARPA requested $18 million for SemaFor in FY2024, $4 million less than the FY2023 appropriation. Technologies developed by both SemaFor and MediFor are intended to improve defenses against adversary information operations. (2b)
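The triage step described above, scoring suspected deep fakes and queuing the worst for human review, can be illustrated with a few lines of Python. The detector names, weights, and fields below are invented for this sketch; they are not SemaFor's.

```python
# Illustrative triage sketch (not SemaFor): combine the outputs of several
# hypothetical detectors into one score and queue the worst items for analysts.
from dataclasses import dataclass

@dataclass
class MediaItem:
    item_id: str
    digital_integrity: float   # 0 = clean, 1 = strong pixel-level anomalies
    physical_integrity: float  # 0 = clean, 1 = violates lighting/physics cues
    semantic_integrity: float  # 0 = clean, 1 = mismatched earrings, garbled text, etc.

WEIGHTS = {"digital": 0.3, "physical": 0.3, "semantic": 0.4}  # arbitrary example weights

def priority(item: MediaItem) -> float:
    """Weighted combination of detector scores; higher means review sooner."""
    return (WEIGHTS["digital"] * item.digital_integrity
            + WEIGHTS["physical"] * item.physical_integrity
            + WEIGHTS["semantic"] * item.semantic_integrity)

def review_queue(items: list[MediaItem], top_k: int = 10) -> list[MediaItem]:
    """Return the top_k most suspicious items for human review."""
    return sorted(items, key=priority, reverse=True)[:top_k]
```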
AI-generated content is emerging as a disruptive political force just as nations around the world are gearing up for a rare convergence of election cycles in 2024.
Why it matters: Around one billion voters will head to the polls in 2024 across the U.S., India, the European Union, the U.K. and Indonesia, plus Russia — but neither AI companies nor governments have put matching election protections in place.
State of play: Election authorities, which are often woefully underfunded, must lean on existing rules to cope with the AI deluge.
How it works: AI could upend 2024 elections via…
Social media platforms, meanwhile, are cutting back on their election integrity efforts.
Between the lines: Newer platforms have little experience with big elections, let alone six in one year, and fewer local offices than more established rivals.
Of note: Secretary of State Antony Blinken announced in a speech Tuesday that the State Department has developed an AI-enabled content aggregator “to collect verifiable Russian disinformation and then to share that with partners around the world.”
What they’re saying: Katie Harbath, who led Facebook’s election efforts from 2013 to 2019 and is now a consultant, told Axios that “the 2024 election is going to be exponentially more challenging than it was in 2020 and 2016.”
On the government side, efforts to grapple with AI are just beginning.
On Thursday, June 13, 2019, at 9:00 am, the House Permanent Select Committee on Intelligence convened an open hearing on the national security challenges of artificial intelligence (AI), manipulated media, and “deepfake” technology. This was the first House hearing devoted specifically to examining deepfakes and other types of AI-generated synthetic data. During the hearing, the Committee examined the national security threats posed by AI-enabled fake content, what can be done to detect and combat it, and what role the public sector, the private sector, and society as a whole should play to counter a potentially grim, “post-truth” future.
Witnesses:
Ideally, a renewed version of the failed legislation itemized below – and some of the recommendations for Congress from the CRS – will take flight sooner rather than later.
This governmental activity still raises the question: what role will the private sector play in addressing this imminent threat? And what responsibility do individuals have, through civic and digital media literacy, to think critically and discern these threats?
Overview: To combat the spread of disinformation through restrictions on deep-fake video alteration technology.
Status: Died in a previous Congress
This bill was introduced on April 8, 2021, in a previous session of Congress, but it did not receive a vote.
Although this bill was not enacted, its provisions could have become law by being included in another bill. It is common for legislative text to be introduced concurrently in multiple bills (called companion bills), re-introduced in subsequent sessions of Congress in new bills, or added to larger bills (sometimes called omnibus bills).
Summary: The summary below was written by the Congressional Research Service, which is a nonpartisan division of the Library of Congress, and was published on Sep 21, 2021.
Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2021 or the DEEP FAKES Accountability Act:
Some analysts have noted that algorithm-based detection tools could lead to a cat-and-mouse game, in which the deep fake generators are rapidly updated to address flaws identified by detection tools. For this reason, they argue that social media platforms—in addition to deploying deep fake detection tools—may need to expand the means of labeling and/or authenticating content. This could include a requirement that users identify the time and location at which the content originated or that they label edited content as such.
Other analysts have expressed concern that regulation of deep fake technology could impose undue burden on social media platforms or lead to unconstitutional restrictions on free speech and artistic expression. These analysts have suggested that existing law is sufficient for managing the malicious use of deep fakes. Some experts have asserted that responding with technical tools alone will be insufficient and that instead the focus should be on the need to educate the public about deep fakes and minimize incentives for creators of malicious deep fakes.
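A minimal sketch of what the content labeling and authentication discussed above could look like in practice follows: the uploader attaches provenance metadata (capture time, location, whether the content was edited) plus a signature, and the platform later verifies that neither the content nor the label has been altered. The key handling here is deliberately simplified to an HMAC with a hypothetical shared key; real provenance efforts such as C2PA use public-key signatures and much richer manifests.

```python
# Simplified content-provenance sketch: sign and verify a label that records
# when and where content originated and whether it was edited.
import hashlib
import hmac
import json

SECRET_KEY = b"shared-platform-key"  # hypothetical; real systems would use public-key signatures

def label_content(content: bytes, captured_at: str, location: str, edited: bool) -> dict:
    """Build a provenance label bound to the content hash, then sign it."""
    metadata = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "captured_at": captured_at,
        "location": location,
        "edited": edited,
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    metadata["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return metadata

def verify_label(content: bytes, metadata: dict) -> bool:
    """Check that the signature is valid and the content hash still matches."""
    claimed = dict(metadata)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

# Example (hypothetical values):
# label = label_content(b"video-bytes", "2024-03-01T12:00:00Z", "Kyiv, UA", edited=False)
# assert verify_label(b"video-bytes", label)
```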