The Center for Security and Emerging Technology (CSET) offers this sobering reality:
“Artificial intelligence (AI), specifically machine learning (ML), is poised to amplify disinformation campaigns—influence operations that involve covert efforts to intentionally spread false or misleading information.”
AI and disinformation are the timely subjects of a new series of policy briefs from CSET. The first installment, AI and the Future of Disinformation Campaigns, Part 1: The RICHDATA Framework, was just released. Disinformation is not new, of course, but its scale and severity seem to have reached a zenith, broadsiding contemporary politics, public health policy, and many other domains. You name it, disinformation is in the mix scrambling truth and reality.
This CSET policy brief is yet another offering in the growing marketplace of ideas rising to the disinformation challenge. Government agencies, think tanks, and academics everywhere are in what can be characterized as a 'naming and framing' phase: trying to wrestle the problem to the ground before tackling it.
Think of the period between the metaphorical description of the internet as the "information superhighway" in the mainstream media in 1992 and Microsoft's licensing of the Mosaic browser to create Internet Explorer 1.0 to play catch-up with Netscape (1995). It took roughly three years of 'naming and framing' the potential of the internet before a "stack" emerged for the deployment of market-driven, scalable applications.
The other, more recent analogy is the Operation Warp Speed mRNA vaccine development cycle, from the onset of the pandemic in February 2020 to the first shots in arms in December 2020 (roughly ten months). Both are imperfect analogies, as it is safe to say there will be no holistic equivalent of the web browser/HTML stack or the mRNA vaccine platform that applies to the disparate challenges created by disinformation campaigns.
Both analogies also had clear institutional knowledge and research momentum behind them, in the form of computer science innovation and government-sponsored basic research in genomics and biotechnology, respectively. These efforts were, to a certain extent, structured and funneled, and they had market logic surrounding their development. Conversely, the exploration of solutions to disinformation is emanating from a variety of disciplines and knowledge domains, and it is neither centralized nor holistic, neither structured nor funneled.
CSET is among a handful of best-in-class think tanks and academic institutions taking a leading role in shaping a broad, policy-driven response to mis- and disinformation, alongside the Center for Humane Technology, the Institute for Rebooting Social Media at the Berkman Klein Center for Internet & Society at Harvard University, the Stanford Internet Observatory, and the Technology and Social Change Research Project at the Shorenstein Center on Media, Politics and Public Policy at Harvard Kennedy School. The Aspen Institute effort, Aspen Digital's Commission on Information Disorder, just released its final report. DARPA is hard at work with the Influence Campaign Awareness and Sensemaking (INCAS) program and the Semantic Forensics (SemaFor) program.
Other positive efforts to improve the information ecosystem include Facebook's recent removal of detailed ad targeting options, The Filter Bubble Transparency Act pending in Congress, and the awarding of the 2021 Nobel Peace Prize to Filipina journalist Maria Ressa (founder of the Philippine news site Rappler) and Russian journalist Dmitry Muratov (editor-in-chief of the Russian newspaper Novaya Gazeta), both of whom speak truth to power in autocratic countries and have taken up the disinformation crisis as activists.
Part 1 of the CSET series on AI and disinformation offers "a framework…to describe the stages of disinformation campaigns and commonly used techniques. Introducing the RICHDATA framework—a disinformation kill chain—this report describes the stages and techniques used by human operators to build disinformation campaigns."
The report is organized around case studies that clarify how these disinformation techniques function. Its strongest takeaways and recommendations concern the systemic challenges that amplify troubling disinformation trends.
We will be offering an expanded analysis of the CSET framework as soon as Part 2 of the series is made available, which “examines how AI/ML technologies may shape future disinformation campaigns and offers recommendations for how to mitigate them.” Also, be on the lookout in the weeks ahead for an OODA Loop analysis of the recently released Commission on Information Disorder Final Report.
Please reach out to us if there is a particular aspect of these disinformation research efforts that you find more insightful than the others, or of particular value to your business or organization, and tell us why. We are trying to puzzle out the threat posed by disinformation as best we can, and we want to ensure our efforts are of value to the OODA Loop membership. We would really appreciate feedback.
A direct link to the CSET Report: AI and the Future of Disinformation Campaigns – Center for Security and Emerging Technology
Now more than ever, organizations need to apply rigorous thought to business risks and opportunities. In doing so, it is useful to understand the concepts embodied in the terms Black Swan and Gray Rhino. See: Potential Future Opportunities, Risks and Mitigation Strategies in the Age of Continuous Crisis
The OODA leadership and analysts have decades of experience in understanding and mitigating cybersecurity threats and apply this real-world practitioner knowledge in our research and reporting. This page on the site is a repository of the best of our actionable research as well as a news stream of our daily reporting on cybersecurity threats and mitigation measures. See: Cybersecurity Sensemaking
OODA’s leadership and analysts have decades of direct experience helping organizations improve their ability to make sense of their current environment and assess the best courses of action for success going forward. This includes helping establish competitive intelligence and corporate intelligence capabilities. Our special series on the Intelligent Enterprise highlights research and reports that can accelerate any organization along its journey to optimized intelligence. See: Corporate Sensemaking
This page serves as a dynamic resource for OODA Network members looking for Artificial Intelligence information to drive their decision-making process. This includes a special guide for executives seeking to make the most of AI in their enterprise. See: Artificial Intelligence Sensemaking
From the very beginning of the pandemic, we have focused our research on what may come next and what to do about it today. This section of the site captures the best of our reporting plus daily intelligence as well as pointers to reputable information from other sites. See: OODA COVID-19 Sensemaking Page
A dynamic resource for OODA Network members looking for insights into the current and future developments in Space, including a special executive’s guide to space. See: Space Sensemaking
OODA is one of the few independent research sources with experience in due diligence on quantum computing and quantum security companies and capabilities. Our practitioner's lens ensures our research is grounded in reality. See: Quantum Computing Sensemaking
In 2020, we launched the OODAcast video and podcast series designed to provide you with insightful analysis and intelligence to inform your decision-making process. We do this through a series of expert interviews and topical videos highlighting global technologies such as cybersecurity, AI, quantum computing along with discussions on global risk and opportunity issues. See: The OODAcast