CSET Releases Part II of Series: AI and the Future of Disinformation Campaigns

Part II of the Center for Security and Emerging Technology (CSET) series is available, which “examines how AI/ML technologies may shape future disinformation campaigns and offers recommendations for how to mitigate them.”

We offered an analysis of Part I of the series (CSET Introduces a “Disinformation Kill Chain”) earlier this month. Disinformation is not new, of course, but its scale and severity seem to have reached a zenith, broadsiding contemporary politics, public health policy, and many other domains. You name it, disinformation is in the mix, scrambling truth and reality.

CSET RICHDATA Framework
In Part I of the CSET series on AI and disinformation, the authors offer “a framework…to describe the stages of disinformation campaigns and commonly used techniques. Introducing the RICHDATA framework—a disinformation kill chain—this report describes the stages and techniques used by human operators to build disinformation campaigns.” In Part II of the series: “Building on the RICHDATA framework, this report describes how AI can supercharge current techniques to increase the speed, scale, and personalization of disinformation campaigns.”

CSET Part II Release Headline Reads “Not Your Mother’s Trolls: How AI May Supercharge Disinformation Campaigns”

Part II also offers this stark reality:

“The age of information enabled the age of disinformation. Powered by the speed and volume of the internet, disinformation has emerged as an instrument of strategic competition and domestic political warfare. It is used by both state and non-state actors to shape public opinion, sow chaos, and erode societal trust. Artificial intelligence (AI), specifically machine learning (ML), is poised to amplify disinformation campaigns—influence operations that involve covert efforts to intentionally spread false or misleading information.  Our findings show that the use of AI in disinformation campaigns is not only plausible but already underway.”

The report is framed around the following risks:

  • ML algorithms excel at harnessing data and finding patterns that are difficult for humans to observe;
  • The data-rich environment of modern online existence creates a terrain ideally suited for ML techniques to precisely target individuals;
  • Language generation capabilities and the tools that enable deepfakes are already capable of manufacturing viral disinformation at scale and empowering digital impersonation; and
  • The same technologies, paired with human operators, may soon enable social bots to mimic human online behavior and to troll humans with precisely tailored messages.

These risks combine with the following trends to exacerbate the threat posed by AI and disinformation campaigns:

  • The blurring lines between foreign and domestic disinformation operations;
  • The outsourcing of these operations to private companies that provide influence as a service;
  • The dual-use nature of platform features and applications built on them; and
  • Conflict over where to draw the line between harmful disinformation and protected speech.

Part II Recommendations

  • Develop technical mitigations to inhibit and detect ML-powered disinformation campaigns.
  • Develop an early warning system for disinformation campaigns.
  • Build a networked collective defense across platforms.
  • Examine and deter the use of services that enable disinformation campaigns.
  • Integrate threat modeling and red-teaming processes to guard against abuse.
  • Build and apply ethical principles for the publication of AI research that can fuel disinformation campaigns. The AI research community should assume that disinformation operators will misuse their openly released research. They should develop a publication risk framework to guard against the misuse of their research and recommend mitigations.
  • Establish a process for the media to report on disinformation without amplifying it.
  • Reform recommender algorithms that have empowered current campaigns.
  • Raise awareness and build public resilience against ML-enabled disinformation.

A couple of highlights from these recommendations:

We are encouraged by the ‘go big or go home’ tone of these CSET recommendations, which mirrors the broad call to leadership and immediate action of the Commission on Information Disorder Final Report. CSET concludes “that a future of AI-powered campaigns is likely inevitable. However, this future might not be altogether disruptive if societies act now. Mitigating and countering disinformation is a whole-of-society effort, where governments, technology platforms, AI researchers, the media, and individual information consumers each bear responsibility.”

Also, we want to point out more of the specifics offered by the CSET authors regarding integrating threat modeling and red-teaming processes to guard against abuse (both core competencies here at OODA):

“Platforms and AI researchers should adapt cybersecurity best practices to disinformation operations, adopt them into the early stages of product design, and test potential mitigations prior to their release.”

If you have any questions or concerns about disinformation, threat modeling, or red-teaming processes, please contact us.

Related Reading:

Black Swans and Gray Rhinos

Now more than ever, organizations need to apply rigorous thought to business risks and opportunities. In doing so, it is useful to understand the concepts embodied in the terms Black Swan and Gray Rhino. See: Potential Future Opportunities, Risks and Mitigation Strategies in the Age of Continuous Crisis

Cybersecurity Sensemaking: Strategic intelligence to inform your decision-making

The OODA leadership and analysts have decades of experience in understanding and mitigating cybersecurity threats and apply this real-world practitioner knowledge in our research and reporting. This page on the site is a repository of the best of our actionable research as well as a news stream of our daily reporting on cybersecurity threats and mitigation measures. See: Cybersecurity Sensemaking

Corporate Sensemaking: Establishing an Intelligent Enterprise

OODA’s leadership and analysts have decades of direct experience helping organizations improve their ability to make sense of their current environment and assess the best courses of action for success going forward. This includes helping establish competitive intelligence and corporate intelligence capabilities. Our special series on the Intelligent Enterprise highlights research and reports that can accelerate any organization along their journey to optimized intelligence. See: Corporate Sensemaking

Artificial Intelligence Sensemaking: Take advantage of this mega trend for competitive advantage

This page serves as a dynamic resource for OODA Network members looking for Artificial Intelligence information to drive their decision-making process. This includes a special guide for executives seeking to make the most of AI in their enterprise. See: Artificial Intelligence Sensemaking

COVID-19 Sensemaking: What is next for business and governments

From the very beginning of the pandemic we have focused on research on what may come next and what to do about it today. This section of the site captures the best of our reporting plus daily intelligence as well as pointers to reputable information from other sites. See: OODA COVID-19 Sensemaking Page.

Space Sensemaking: What does your business need to know now

A dynamic resource for OODA Network members looking for insights into the current and future developments in Space, including a special executive’s guide to space. See: Space Sensemaking

Quantum Computing Sensemaking

OODA is one of the few independent research sources with experience in due diligence on quantum computing and quantum security companies and capabilities. Our practitioner’s lens ensures our research is grounded in reality. See: Quantum Computing Sensemaking.

The OODAcast Video and Podcast Series

In 2020, we launched the OODAcast video and podcast series designed to provide you with insightful analysis and intelligence to inform your decision-making process. We do this through a series of expert interviews and topical videos highlighting global technologies such as cybersecurity, AI, and quantum computing, along with discussions on global risk and opportunity issues. See: The OODAcast

About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.