
Last year, OODA Network Member Congressman Will Hurd served as a commissioner on the Aspen Institute’s Commission on Information Disorder. Congressman Hurd will take part in a Keynote Conversation at OODAcon, the final session of the event, which will be held on Tuesday, October 18th. In the run-up to the event next week, the following is an update on the research and project outcomes of various global efforts working to understand and combat “information disorder” and to build a strong cognitive infrastructure.

Working Definitions

The terms disinformation and misinformation are defined in a variety of ways. The Commission on Information Disorder employed the following definitions for the purposes of its final report.

Information disorder, a term coined by First Draft co-founder Claire Wardle, denotes the broad societal challenges associated with misinformation, disinformation, and malinformation.

Disinformation is false or misleading information, intentionally created or strategically amplified to mislead for a purpose (e.g., political, financial, or social gain).

Misinformation is false or misleading information that is spread without necessarily any intent to mislead.

OODA Loop frames its research around cognitive infrastructure, a concept formulated by OODA CTO Bob Gourley: the mental capacities of a nation-state’s citizens and the decision-making ability of its people, organizations, and government. It also includes the information channels used to inform decision-making and the education and training systems used to prepare citizens and organizations for critical thinking. Our cognitive infrastructure is threatened in ways few of us imagined just a few years ago: traditional propaganda techniques have been modernized and are now aided by advanced technologies and new methods of information dissemination.

The Aspen Institute’s Commission on Information Disorder

Since the release of its final report in December 2021, the Commission has generated the following outputs:

August 2022: Policy 101s: Mis- and Disinformation – Aspen Tech Policy Hub: The Aspen Tech Policy Hub released its fifth Policy 101, focused on mis- and disinformation. These 101s are a series of basic overviews of key tech policy issues of the day, written by Hub staff, alums, and friends. The document gives a brief overview of what constitutes mis- versus disinformation, the challenges this content poses, and what different stakeholders are doing about it. From the document:

What are policymakers doing about mis- and disinformation?:  In short, there is currently no comprehensive public policy in the US that addresses mis- and disinformation on the internet. Though federal agencies like the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) have rules to prevent some instances of mis- and disinformation spread through media, radio, and advertising, they do not exercise the same authority over the internet, where misinformation today spreads the fastest.

What are platforms doing about mis- and disinformation?: Shielded by Section 230 of the Communications Decency Act (see our Policy 101 here), companies have great discretion over how they manage mis- and disinformation on their platforms. Section 230 protects companies from being held accountable for third-party content on their platforms, allowing them to adjudicate the truthfulness of users’ posts as much or as little as they wish. Companies that choose to tackle mis- and disinformation face two big challenges: identifying misinformation and figuring out what to do with it. To identify false information, many platforms use a combination of automated algorithms and human review.

Then, platforms like Facebook and Twitter “tag” certain posts to indicate to users that content might not be factual. Platforms have also iterated, with varying success, on features that “crowdsource fact-checking” by allowing users to annotate posts; warn users not to share links they haven’t opened; and apply extra restrictions to content related to elections and public health. Some platforms, including Twitter, have also adopted policies to remove users who repeatedly share disinformation. Partly in response, new platforms have promised not to remove users on the basis of their posts.
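The two-stage pattern described above (automated scoring plus human review, followed by tagging or reach limits) can be made concrete with a small sketch. The following Python is a hypothetical illustration only: the classifier stub, thresholds, and action names are assumptions for the example, not any platform’s actual moderation system.

```python
from dataclasses import dataclass

# Illustrative thresholds -- real systems tune these empirically.
AUTO_TAG_THRESHOLD = 0.90      # confident enough to label automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # ambiguous: route to a human reviewer

@dataclass
class Post:
    post_id: str
    text: str

def score_misinformation(post: Post) -> float:
    """Stand-in for a trained classifier; a real system would use a
    text model plus behavioral signals such as sharing patterns."""
    return 0.95 if "miracle cure" in post.text.lower() else 0.10

def triage(post: Post) -> str:
    """Automated first pass; humans adjudicate the gray zone."""
    score = score_misinformation(post)
    if score >= AUTO_TAG_THRESHOLD:
        return "tag-and-limit-reach"     # e.g., warning label, reduced distribution
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "queue-for-human-review"
    return "no-action"

if __name__ == "__main__":
    print(triage(Post("1", "This miracle cure ends all disease!")))  # tag-and-limit-reach
    print(triage(Post("2", "Lovely fall weather in Boston today."))) # no-action
```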

May 2022: Aspen Tech Policy Hub Announces Alterea, Inc.’s “Agents of Influence” as Information Disorder Prize Competition Winner: At a live pitch event where four semi-finalists presented before a panel of judges, the Alterea, Inc. team was awarded $75,000 to execute “Agents of Influence” and make meaningful progress toward ending information disorder. The team members are Anahita Dalmia, Jasper McEvoy, and Alex Walter. “Our goal is that, by playing this game, students realize the impact information has on their worldview,” said Dalmia during the presentation. “By empowering people, particularly the next generation, we knew we could have a positive effect on . . . civic empowerment.”

The video game teaches middle and high schoolers to recognize misinformation, think critically, and make more responsible decisions. Through interactive narratives and games that teach counter-misinformation best practices, students save the fictional Virginia Hall High School from the plots of Harbinger, an evil spy organization using misinformation to manipulate the student body. The award is the culmination of the Aspen Tech Policy Hub Information Disorder Prize Competition. The prize competition, launched in November 2021, asked applicants for unique and innovative projects aimed at combatting mis- and disinformation, in direct connection to one or more of the 15 recommendations announced by the Aspen Institute’s Commission on Information Disorder.

The MIT AI Policy Forum:  Social Media Reform

The MIT AI Policy Forum (AIPF) is a global initiative of the MIT Schwarzman College of Computing, which was launched in 2018. Blackstone Group Chairman Stephen A. Schwarzman donated $350 million of the $1.1 billion in funding committed to the school, the “single largest investment in computing and AI by an American academic institution.” What sets the AIPF apart from other organizations dedicated to AI research and policy is its commitment to global collaboration in moving from AI principles to AI practice; the AIPF’s leadership is committed to making a tactical impact. For more on the MIT AI Policy Forum, see: From AI Principles to AI Practice at a Global Scale

The AI Policy Forum Summit 2022

The AIPF’s annual capstone event for 2022, the AI Policy Forum Summit, was held in September. The following is the organizers’ description of the event:

“There is, of course, no shortage of discussion about AI at different venues, but we believe that the current public discourse would greatly benefit from a deeper and more focused inquiry. To this end, the AI Policy Forum Summit will involve exploration and in-depth discussions of critical questions and issues in this space, as well as consideration of possible future developments and concrete guidance for governments and companies on implementing AI-related policies.

Hosted by the MIT AI Policy Forum — an initiative of the MIT Schwarzman College of Computing to move the global conversation about the impact of AI from principles to practical policy implementation — leaders from government, business, and academia will convene for a day-long dialogue focusing on the global policy challenges surrounding the deployment of AI in key areas, such as:

  • The development of truly trustworthy AI,
  • The challenge of making AI work for consumers in finance, and
  • Charting a viable path towards social media reform.”

Panel 3 of the Summit took up this topic of Social Media Reform:

There is a growing consensus that social media not only has a major impact on our lives and society but also constitutes a domain in need of careful change and regulation. But how should we go about such reform? And how do we navigate the complex interplay of technological, legal, and policy considerations? Panelists included:

  • Daron Acemoglu, Institute Professor, MIT
  • Martha Minow, 300th Anniversary University Professor, Harvard University
  • Alejandro Poiré, Dean, School of Government & Public Policy, Monterrey TEC
  • Asu Ozdaglar, Deputy Dean of Academics, MIT Schwarzman College of Computing; Department Head, EECS; moderator

The video of the panel can be found here: AI Policy Forum Summit 2022 (videos)

The CISA CSAC: Cognitive Infrastructure Research and Election Public Messaging

The Cybersecurity and Infrastructure Security Agency (CISA) continues to model an operational structure with an effective public/private partnership component that yields actionable results. The latest success is the evolution of the CISA Cybersecurity Advisory Committee (CSAC), which meets quarterly, and its subcommittees, particularly the time-sensitive work of the Protecting Critical Infrastructure from Misinformation and Disinformation (MDM) Subcommittee. Our case study, The CISA CSAC: Cognitive Infrastructure Research and Election Public Messaging, traces the anatomy of a CSAC subcommittee, from the mission statement formulated in December 2021 through the subcommittee’s quarterly updates, reports, and recommendations. The case study concludes with the recently released public service announcement from the FBI and CISA, which demonstrates the value and impact of the subcommittee’s work since December 2021.

EU Disinfo Lab Update

EU DisinfoLab is a young independent NGO focused on researching and tackling sophisticated disinformation campaigns targeting the EU, its member states, core institutions, and core values. The Lab publishes a monthly update; highlights from the latest edition follow:

Disinfo News & Updates

  • Putin’s surveillance network. This New York Times article details the activities of Russia’s internet regulator, Roskomnadzor – which was formed in 2008 to oversee radio signals, telecoms and the Russian mail service. Thanks to thousands of pages of leaked files, we get a better glimpse at how powerful it has become both in terms of internet oversight and as a weapon to be deployed in Putin’s propaganda campaigns.
  • Facebook experiment. Facebook gathered 250 people from five different countries to discuss solutions to misleading climate information circulating on the platform. While Facebook didn’t reveal what was decided, it stated that the sessions led to “high amounts of both participant engagement and satisfaction” and could open up opportunities for users to help write speech rules. In a post-survey, “80 percent of participants said Facebook users like them should have a say in policy development.”
  • Cases against social platforms. This Bloomberg article states that more than seventy lawsuits have been filed this year in the U.S. against Meta, Snap, TikTok, and Google claiming that Silicon Valley’s algorithms are causing adolescents and young adults real-world harm as a result of their addiction to social media.

EU policy monitor

  • Digital Services Act. The final version of the Digital Services Act has received the Council’s approval. The final signature by the European Parliament and the Council is expected on 19 October. After the signature, the text must be published in the Official Journal of the European Union. It should enter into force 20 days after publication in the Journal but will apply across the board only 15 months after entering into force. Its application to very large online platforms (VLOPs) will come earlier: 4 months after their designation as such (see the short date sketch after this list).
  • The Digital Markets Act. The text has been signed by the Presidents of the European Parliament and the Council. It should be published in the European Union’s Official Journal on 13 October.
  • Regulation on political advertising. All the amendments to the Internal Market and Consumer Protection (IMCO) Committee report have been published (Amendments 140-409 and Amendments 410-686).
  • European Media Freedom Act. The proposal was revealed by the European Commission on 16 September. Have a look at our initial reaction here. Discussions between the Audiovisual Working Party and the European Commission have started, while the European Parliament is deliberating over which committee will lead the work. The results of these discussions should be clear by the end of November; CULT is likely to be in the driver’s seat, with LIBE and IMCO as associated committees. In the meantime, the Commission has also launched a feedback process, open until 28 November, to feed into the legislative debate.
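To make the layered Digital Services Act timeline above concrete, here is a short Python sketch of the date arithmetic. The publication and designation dates are placeholder assumptions (the item above only says publication follows the 19 October signature); the 20-day, 15-month, and 4-month offsets come from the description above.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

# Placeholder assumption: date of publication in the EU Official Journal.
publication = date(2022, 10, 27)

entry_into_force = publication + relativedelta(days=20)            # +20 days
general_application = entry_into_force + relativedelta(months=15)  # +15 months

# VLOPs are covered earlier: 4 months after designation (date assumed here).
vlop_designation = date(2023, 4, 25)
vlop_application = vlop_designation + relativedelta(months=4)

print(f"Enters into force:    {entry_into_force}")
print(f"Applies to all:       {general_application}")
print(f"Applies to this VLOP: {vlop_application}")
```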

The latest from EU DisinfoLab

  • Doppelganger. Our latest OSINT investigation exposes an operation linked to Russia-based actors who cloned legitimate media outlets from multiple European countries to spread disinformation designed to undermine support for Ukraine.
  • Disinformation entrepreneurs. Never heard this expression? Dig into this piece, which provides an analysis of “disinformation entrepreneurs”: YouTube users who have found that the fastest way to grow their accounts is to spread pro-Russian disinformation about the war in Ukraine.

What we’re reading or listening to!

  • Google autocomplete. The latest Crossover investigation demonstrates how French-speaking Belgians were nudged towards dubious sources when searching for the word “Donbass” on Google.
  • What a pixel can tell. Democracy Reporting International released a new report analysing how artificial intelligence could be used to design images that support erroneous narratives, and the impact this could have on democratic public discourse.
  • Abortion rights. Jo Glanville talks with leading activists and advocates, including Venny Ala-Siurua, Executive Director at Women on Web; Lana Dimitrijevic, lawyer and founder of the Women’s Rights Foundation in Malta; and Judy Taing, Head of Gender and Sexuality at ARTICLE 19, about the challenges and obstacles to protecting reproductive rights, the role disinformation plays in disrupting these rights, and how tech companies can help.
  • Misinformation in 2022 Quebec elections. The Quebec Election Misinformation Project at McGill University’s Media Ecosystem Observatory has just published its mid-election analysis of misinformation during the Quebec provincial election. Their findings are available here.

Further Resources

For OODA Loop research and analysis on Cognitive Infrastructure, click here.

2022 Foundational Integrity Research request for proposals – Meta Research (facebook.com):  Over the last few years, we have increased our investment in people and technology to minimize the effects of negative experiences people encounter on our platforms. The effectiveness of these efforts relies strongly on our partnerships with social scientists to conduct foundational and applied research around challenges pertaining to platform governance in domains such as misinformation, hate speech, violence and incitement, and coordinated harm.

In this request for proposals (RFP), Meta is offering awards to global social science researchers interested in exploring integrity issues related to social communication technologies. We will provide a total of $1,000,000 USD in funding for research proposals that aim to enrich our understanding of challenges related to integrity issues on social media and social technology platforms. Our goal for these awards is to support the growth of scientific knowledge in these spaces and to contribute to a shared understanding across the broader scientific community and the technology industry on how social technology companies can better address integrity issues on their platforms. Research is not restricted to focusing on Meta apps and technologies.

Community-based strategies for combating misinformation: Learning from a popular culture fandom | HKS Misinformation Review (harvard.edu):  Through the lens of one of the fastest-growing international fandoms, this study explores everyday misinformation in the context of networked online environments. Findings show that fans experience a range of misinformation, similar to what we see in other political, health, or crisis contexts. However, the strong sense of community and shared purpose of the group is the basis for effective grassroots efforts and strategies to build collective resilience to misinformation, which offer a model for combating misinformation in ways that move beyond the individual context to incorporate shared community values and tactics.

Fighting Misinformation (communicatetovaccinate.com): About the project: Employers in the U.S. play a big role in their employees’ healthcare access and health and wellness outcomes. Employers could have an especially important influence on whether employees get the COVID-19 vaccine. Fighting misinformation: acknowledge that misinformation is widespread and counter it quickly, respectfully, and concretely.

The Role of Collaboration and Sharing in the Spread of Disinformation: As misinformation, disinformation, and conspiracy theories increase online, so does journalism coverage of these topics. This reporting is challenging, and journalists fill gaps in their expertise by utilizing external resources, including academic researchers. This paper discusses how journalists work with researchers to report on online misinformation. Through an ethnographic study of thirty collaborations, including participant-observation and interviews with journalists and researchers, we identify five types of collaborations and describe what motivates journalists to reach out to researchers, from a lack of access to data to a need for help understanding the context of misinformation. We highlight challenges within these collaborations, including misalignment in professional work practices, ethical guidelines, and reward structures. We end with a call to action for CHI researchers to attend to this intersection, develop ethical guidelines around supporting journalists with data at speed, and offer practical approaches for researchers playing a “data mediator” role between social media and journalists. Author: Kate Starbird, Professor of Human Centered Design & Engineering (HCDE) at the University of Washington in Seattle.

Disinformation as Adversarial Narrative Conflict: Defining disinformation is the hardest part of combatting it. Whether one works in trust and safety at a major platform, codifies new regulations as a global policymaker, or rates open web domains for disinformation risk, working from a common, comprehensive definition is paramount. But defining disinformation often feels akin to catching smoke. Add the complication of parochial commercial interests, or of the malicious actors themselves, and you get a picture of why a good definition is so difficult to pin down. In 2019, the Global Disinformation Index (GDI) published an in-depth report, Adversarial Narratives: A New Model for Disinformation, which laid the foundation for a new definition of disinformation, one that captures this nuance. We consider this definition to be one of our most innovative contributions to the counter-disinformation space, and it underlies everything we do, in both human-powered research and automation.

What’s Deepfake Bruce Willis Doing in My Metaverse? (wired.com): For a couple of days in late September, no one seemed clear on who owned Bruce Willis. The British newspaper The Telegraph claimed that the actor, who has retired because he suffers from aphasia, had digitally reincarnated his career by selling performance rights to a company called Deepcake, which used artificial intelligence to map Willis’ face onto another actor. Not long after, representatives of Willis said that the star of Die Hard had done no such thing and had no relationship with Deepcake, even though the company’s website featured a complimentary quote from the star.

Twitter introduces a new policy for addressing misinformation during crises: Twitter announced new content-moderation policies Thursday to crack down on misinformation related to wars, natural disasters, and other crises. Twitter’s new “crisis misinformation policy” is rolling out globally and seeks to slow the spread of misleading information that could lead to “severe harms” during humanitarian emergencies, according to the company. The policy will start with enforcement related to the armed conflict in Ukraine, with plans to expand to other crises, such as public health emergencies and natural disasters. Under the policy, once Twitter has evidence that a claim is misleading, it will stop amplifying it across the platform. It will also “prioritize adding warning notices” to viral Tweets or those from high-profile accounts and will disable Likes, Retweets, and Shares for the content.
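Read as a rule set, the enforcement flow described above is simple to express. The sketch below is one hypothetical reading of the public policy summary, not Twitter’s implementation; the field names and the viral threshold are assumptions for illustration.

```python
from dataclasses import dataclass

VIRAL_THRESHOLD = 100_000  # assumed impression count for "viral"

@dataclass
class Tweet:
    impressions: int
    author_is_high_profile: bool
    flagged_misleading: bool  # i.e., there is evidence the claim is misleading

def crisis_enforcement(tweet: Tweet) -> list[str]:
    """Apply the crisis-misinformation steps in the order described above."""
    actions: list[str] = []
    if not tweet.flagged_misleading:
        return actions
    actions.append("stop-amplification")  # no longer recommended or boosted
    if tweet.impressions >= VIRAL_THRESHOLD or tweet.author_is_high_profile:
        actions.append("add-warning-notice")
        actions.append("disable-likes-retweets-shares")
    return actions

print(crisis_enforcement(Tweet(250_000, False, True)))
# ['stop-amplification', 'add-warning-notice', 'disable-likes-retweets-shares']
```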

Google examines how different generations handle misinformation:  A new survey by Google shows Gen Z is better than millennials or boomers at fact-checking—but previous research tells a different story.

True Costs of Misinformation Workshop | HKS Shorenstein Center:  What are the financial, social, and human costs of misinformation? What is the price that businesses, hospitals, civil society groups, and schools pay for false or misleading information online? How can researchers support public officials and especially the communities targeted by disinformation campaigns when costing out “fake news funds” and building capacity for digital resilience? Can we put a price tag on misinformation, and if so, how, and who is responsible for paying it?

This workshop invited academics, journalists, civil society actors, and private industry leaders to engage with these questions in order to understand the true costs of misinformation and, in doing so, better inform policies on internet governance, private sector regulation, and technological innovation. The organizers aimed to expand the terms of debate in disinformation studies and bring communication and digital politics scholars into conversation with economists, climate change modeling experts, humanitarian and human rights workers, and public health scholars. By bringing together experts in adjacent fields developing impact assessment models, crisis response frameworks, auditing tools, and accountability guidelines and mechanisms, the event explored novel and creative explanatory models to study the impacts of misinformation and advanced a “whole-of-society approach” (Donovan et al. 2021).

The Network Structure of Online Amplification by Ryan J. Gallagher:  Social media relies on amplification. It is at the heart of how marginalized communities voice injustices, how information operations stoke long-standing racial divisions, how elected officials communicate public health guidance, how misinformation proliferates through vulnerable populations, how unlikely friends connect online, and how abusers perpetrate harassment at scale. In all of its uses, good and bad, amplification is critical to how individuals construct online communication networks.

These networks emerge from the ties that are implied by amplification: any share, retweet, or crosspost by one person forms a connection between them and the person that they are amplifying. As people amplify many different memes, stories, videos, and other content, they gradually create a network with a dense core, consisting of all those who received the most amplification. Around that core radiates a periphery, all those who amplified content from the core, giving it visibility. These two network components are interdependent: without the periphery’s amplification, there is no core; and without the core’s content, there is no periphery. So it is only by considering these two components together—the core and the periphery—that we can fully understand the structure of amplification.
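Gallagher’s core-periphery framing maps naturally onto standard network analysis. Below is a minimal sketch, assuming the networkx library and a toy list of (amplifier, amplified) retweet pairs; it uses k-core decomposition as one rough proxy for separating the densely amplified core from the periphery. The data and the k-core choice are illustrative assumptions, not Gallagher’s exact method.

```python
import networkx as nx  # pip install networkx

# Toy amplification events: (amplifier, amplified), e.g., retweets.
events = [
    ("fan1", "celeb"), ("fan2", "celeb"), ("fan3", "celeb"),
    ("fan1", "analyst"), ("fan2", "analyst"), ("analyst", "celeb"),
    ("celeb", "analyst"), ("lurker", "fan1"),
]

# Each share, retweet, or crosspost implies a tie from amplifier to amplified.
G = nx.DiGraph()
G.add_edges_from(events)

# k-core decomposition on the undirected projection: the maximum k-core is a
# rough stand-in for the dense core of heavily amplified accounts.
core_numbers = nx.core_number(G.to_undirected())
k_max = max(core_numbers.values())
core = {n for n, c in core_numbers.items() if c == k_max}
periphery = set(G) - core

print("core:", sorted(core))
print("periphery:", sorted(periphery))
```

On this toy graph, the densely interconnected accounts (the heavily amplified ones and their most active amplifiers) land in the core, while one-off amplifiers fall to the periphery, mirroring the interdependence Gallagher describes: the periphery supplies visibility, the core supplies content.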

What Memes Have to Do With January 6 – The Atlantic: An excerpt from the recently published Meme Wars: The Untold Story of the Online Battles Upending Democracy in America by Joan Donovan, Emily Dreyfuss, and Brian Friedberg.

United States Senate Committee on the Judiciary hearing, “Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds”: Tristan Harris’s and Joan Donovan’s testimony to the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.

How China, Russia and Iran amplify COVID disinformation (axios.com): China, Russia and Iran — drawing on one another’s online disinformation — amplified false theories that the COVID-19 virus originated in a U.S. bioweapons lab or was designed by Washington to weaken their countries, according to a nine-month investigation by AP and the Atlantic Council’s DFRLab.

OODAcon 2022

To register for OODAcon, go to: OODAcon 2022 – The Future of Exponential Innovation & Disruption

Ongoing efforts to combat information disorder and strengthen our cognitive infrastructure will be discussed at OODAcon 2022 – The Future of Exponential Innovation & Disruption in the context of the following panels:

  • The Disruptive Futures: Digital Self Sovereignty, Blockchain, and AI
  • Tomorrowland: A Global Threat Brief
  • Future Wars:  Beyond Cyberconflict
  • Open the Pod Bay Door – Resetting the Clock on Artificial Intelligence
  • Twenty Years of Cyber Threat Intelligence
  • Keynote Conversation with Congressman Will Hurd

Society, technology, and institutions are confronting unprecedented change. The rapid acceleration of innovation, disruptive technologies and infrastructures, and new modes of network-enabled conflict require leaders to not only think outside the box but to think without the box.

The OODAcon conference series brings together the hackers, thinkers, strategists, disruptors, leaders, technologists, and creators with one foot in the future to discuss the most pressing issues of the day and provide insight into the ways technology is evolving. OODAcon is not just about understanding the future but developing the resiliency to thrive and survive in an age of disruption.

OODAcon is the next-generation event for understanding the next generation of risks and opportunities.

OODA Network Members receive a 50% discount on ticket prices. For more on network benefits and to sign up see: Join OODA Loop

Please register to attend today and be a part of the conversation.


About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.