Part II of the Center for Security and Emerging Technology (CSET) series is now available; it “examines how AI/ML technologies may shape future disinformation campaigns and offers recommendations for how to mitigate them.”
We offered an analysis of Part I of the series (CSET Introduces a “Disinformation Kill Chain”) earlier this month. Disinformation is not new, of course, but its scale and severity seem to have reached a zenith, broadsiding contemporary politics, public health policy, and many other domains. You name it, disinformation is in the mix, scrambling truth and reality.
In Part I of the CSET series on AI and disinformation, the authors offer “a framework…to describe the stages of disinformation campaigns and commonly used techniques. Introducing the RICHDATA framework—a disinformation kill chain—this report describes the stages and techniques used by human operators to build disinformation campaigns.” Part II builds on that foundation: “Building on the RICHDATA framework, this report describes how AI can supercharge current techniques to increase the speed, scale, and personalization of disinformation campaigns.”
Part II also offers this stark reality:
“The age of information enabled the age of disinformation. Powered by the speed and volume of the internet, disinformation has emerged as an instrument of strategic competition and domestic political warfare. It is used by both state and non-state actors to shape public opinion, sow chaos, and erode societal trust. Artificial intelligence (AI), specifically machine learning (ML), is poised to amplify disinformation campaigns—influence operations that involve covert efforts to intentionally spread false or misleading information. Our findings show that the use of AI in disinformation campaigns is not only plausible but already underway.”
The report is framed around the following risks:
These risks combine with the following trends to exacerbate the threat posed by AI and disinformation campaigns:
A couple of highlights from these recommendations:
We are encouraged by the ‘go big or go home’ tone of these CSET recommendations, mirroring the broad call to leadership and immediate action of the Commission on Information Disorder Final Report. CSET concludes “that a future of AI-powered campaigns is likely inevitable. However, this future might not be altogether disruptive if societies act now. Mitigating and countering disinformation is a whole-of-society effort, where governments, technology platforms, AI researchers, the media, and individual information consumers each bear responsibility.”
We also want to point out some of the specifics offered by the CSET authors on integrating threat modeling and red-teaming processes to guard against abuse (both core competencies here at OODA):
“Platforms and AI researchers should adapt cybersecurity best practices to disinformation operations, adopt them into the early stages of product design, and test potential mitigations prior to their release.”
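To make that recommendation concrete, here is a minimal sketch of what folding disinformation abuse cases into early product design and pre-release testing could look like. The AbuseScenario structure, stage names, techniques, and mitigations below are hypothetical illustrations of ours, not drawn from the CSET report; the idea is simply to treat abuse cases as threat-model entries that a red team can check for mitigation coverage before launch.

```python
# Hypothetical sketch: treating disinformation abuse cases as threat-model
# entries and checking mitigation coverage before a feature is released.
# Stage names, techniques, and mitigations below are illustrative only.
from dataclasses import dataclass

@dataclass
class AbuseScenario:
    stage: str        # kill-chain stage (e.g., a RICHDATA stage)
    technique: str    # how an operator might abuse the feature
    mitigation: str   # control expected to be in place before launch

# Scenarios a red team might draft for a text-generation feature.
scenarios = [
    AbuseScenario("Amplification", "bulk generation of persona posts", "rate limiting"),
    AbuseScenario("Degradation", "targeted harassment content", "abuse classifier"),
    AbuseScenario("Hijacking", "impersonation of public figures", "identity verification"),
]

# Mitigations the product team has actually implemented so far.
implemented = {"rate limiting", "abuse classifier"}

def coverage_report(scenarios, implemented):
    """Print any scenario whose expected mitigation is missing; return release readiness."""
    gaps = [s for s in scenarios if s.mitigation not in implemented]
    for s in gaps:
        print(f"GAP: {s.stage} / {s.technique} -> missing '{s.mitigation}'")
    return not gaps

if __name__ == "__main__":
    print("Release ready:", coverage_report(scenarios, implemented))
```

Run as-is, the sketch flags the impersonation scenario as an uncovered gap and reports the feature as not release-ready, which is the kind of pre-release signal the CSET recommendation points toward.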
If you have any questions or concerns about disinformation, threat modeling, or red-teaming processes, please contact us.
Now more than ever, organizations need to apply rigorous thought to business risks and opportunities. In doing so, it is useful to understand the concepts embodied in the terms Black Swan and Gray Rhino. See: Potential Future Opportunities, Risks and Mitigation Strategies in the Age of Continuous Crisis
The OODA leadership and analysts have decades of experience in understanding and mitigating cybersecurity threats and apply this real-world practitioner knowledge in our research and reporting. This page on the site is a repository of the best of our actionable research as well as a news stream of our daily reporting on cybersecurity threats and mitigation measures. See: Cybersecurity Sensemaking
OODA’s leadership and analysts have decades of direct experience helping organizations improve their ability to make sense of their current environment and assess the best courses of action for success going forward. This includes helping establish competitive intelligence and corporate intelligence capabilities. Our special series on the Intelligent Enterprise highlights research and reports that can accelerate any organization along their journey to optimized intelligence. See: Corporate Sensemaking
This page serves as a dynamic resource for OODA Network members looking for Artificial Intelligence information to drive their decision-making process. This includes a special guide for executives seeking to make the most of AI in their enterprise. See: Artificial Intelligence Sensemaking
From the very beginning of the pandemic, we have focused on research into what may come next and what to do about it today. This section of the site captures the best of our reporting plus daily intelligence as well as pointers to reputable information from other sites. See: OODA COVID-19 Sensemaking Page.
A dynamic resource for OODA Network members looking for insights into the current and future developments in Space, including a special executive’s guide to space. See: Space Sensemaking
OODA is one of the few independent research sources with experience in due diligence on quantum computing and quantum security companies and capabilities. Our practitioner’s lens ensures our research is grounded in reality. See: Quantum Computing Sensemaking.
In 2020, we launched the OODAcast video and podcast series, designed to provide you with insightful analysis and intelligence to inform your decision-making process. We do this through a series of expert interviews and topical videos highlighting global technologies such as cybersecurity, AI, and quantum computing, along with discussions on global risk and opportunity issues. See: The OODAcast