
Background

In 2020, OpenAI developed GPT-3, a neural language model capable of sophisticated natural language generation and of tasks like classification, question answering, and summarization. While OpenAI has not open-sourced the model’s code or pre-trained weights at the time of writing, it has built an API for experimenting with the model’s capabilities. When we researched GPT-3’s predecessor, GPT-2, last year, we found that language models have the potential to be used as potent generators of ideologically consistent extremist content.
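For context, access to GPT-3 at the time worked roughly as sketched below: a prompt is submitted to OpenAI’s hosted API and the model returns a text continuation. This is a minimal illustration using the openai Python package as it existed around 2020 (v0.x); the prompt, engine choice, and sampling parameters are our own illustrative assumptions, not drawn from the CTEC experiments.

```python
# Minimal sketch of querying the GPT-3 completion API via the openai Python
# package as it existed circa 2020 (v0.x). Prompt, engine, and parameters
# are illustrative assumptions, not the CTEC experimental setup.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # access was gated by API key

response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 engine exposed at launch
    prompt="Summarize the risks of large language models in one sentence:",
    max_tokens=60,      # cap the length of the generated continuation
    temperature=0.7,    # moderate sampling randomness
)

print(response["choices"][0]["text"].strip())
```

This gated, key-based access is part of what the report credits as OpenAI’s preventative measures: because the model is reachable only through a monitored API, the provider can restrict and audit how it is used.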

In a report released in 2020 and made possible by the OpenAI API Academic Access Program, the Center on Terrorism, Extremism, and Counterterrorism (CTEC) at the Middlebury Institute of International Studies assessed whether GPT-3’s marked improvements over GPT-2 increase the risk of weaponization by extremists, who may attempt to use GPT-3 or hypothetical unregulated copycat models to amplify their ideologies and recruit followers to their communities.

Experimenting with prompts representative of different types of extremist narratives, structures of social interaction, and radical ideologies, the CTEC researchers found:

  • GPT-3 demonstrates significant improvement over its predecessor, GPT-2, in generating extremist texts.
  • GPT-3 shows strength in generating text that accurately emulates interactive, informational, and influential content that could be utilized for radicalizing individuals into violent far-right extremist ideologies and behaviors.
  • While OpenAI’s preventative measures are strong, the possibility of unregulated copycat technology represents a significant risk for large-scale online radicalization and recruitment. In the absence of safeguards, successful and efficient weaponization that requires little experimentation is likely.
  • AI stakeholders, the policymaking community, and governments should begin investing as soon as possible in building social norms, public policy, and educational initiatives to preempt an influx of machine-generated disinformation and propaganda. Mitigation will require effective policy and partnerships across industry, government, and civil society. (1)

What Next?

The report makes the following recommendations:

  • The application of strong and consistent safeguards and restrictions imposed by the creators and distributors of powerful language generation models.
  • Promotion of critical digital literacy and of awareness of synthetic text and the automated distribution of online content, through media campaigns and other forms of educational outreach by government and civil society groups. Researchers and providers could consider contributing to and investing in educational programming designed for mass audiences.
  • Deployment of detection models by service providers and civil society to reduce the distribution of, and shine a spotlight on, nefarious synthetic content within online platforms (see the sketch after this list).
    • Researchers and providers of consumer-facing language model technology could invest in easy-to-use, easy-to-understand detection and filtration systems that are integrated with, and delivered alongside, any publicly accessible language-generation platform. Such safeguard development would ideally occur alongside research on the language models themselves.
  • Support for normative changes within online communities that include valuing identified and verified sources of information, especially within interactive platforms. Partnerships among industry, government and civil society are integral to building and strengthening norms.
    • Coordination with mainstream social media platforms would enable more robust disclosure of the risks of synthetic content to their users.
  • Advocacy, similar to what has been seen in relation to the deployment of facial recognition technology, to demand responsible and transparent application of these models.
    • Improvement in advocacy could come from the formal expansion of partnerships between AI research organizations and civil society groups, with an emphasis on public-facing advocacy. Without a concerted effort to target mainstream audiences, AI safety and ethics discussions can remain restricted to niche and elite communities.
  • Further study of GPT-3 and follow-on models is merited. Evaluating the development and deployment potential of wholly synthetic extremist content in the absence of significant training, funding, or organizational support would more fully characterize the threat of full access to copycat GPT-3 models in the future.
  • Successful weaponization could include the wholesale production of content to synthetically populate interactive platforms such as forums and message boards with minimal need for human curation; testing and full evaluation of such content is recommended. In addition, deploying a survey to gauge the believability and efficacy of synthetic content across multiple platform types would further refine the threat potential of language models like GPT-3. (2)
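To make the detection recommendation concrete, the sketch below shows one simple way a platform might screen submitted text before publication. It uses a perplexity heuristic (machine-generated text tends to be unusually predictable under a language model, the idea behind tools like GLTR) with GPT-2 as the scoring model; the threshold and flagging logic are illustrative assumptions, not a method from the CTEC report.

```python
# Sketch of a perplexity-based screen for possibly synthetic text. Low
# perplexity under a language model suggests the text is model-like; the
# threshold below is an illustrative assumption and would need calibration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means more predictable."""
    ids = tokenizer(text, return_tensors="pt",
                    truncation=True, max_length=512).input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

def flag_if_synthetic(text: str, threshold: float = 20.0) -> bool:
    # Real deployments would calibrate the threshold on labeled examples
    # and combine several signals rather than rely on perplexity alone.
    return perplexity(text) < threshold

print(flag_if_synthetic("An example forum post to screen before it goes live."))
```

Detection of this kind is a weak signal on its own, and it degrades as generators improve, which is why the report pairs it with disclosure norms, digital literacy, and cross-sector partnerships rather than treating any single safeguard as sufficient.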

For the Full Report: The Radicalization Risks of GPT-3 and Advanced Neural Language Models

Stay Informed

It should go without saying that tracking threats is critical to informing your actions. This includes reading our OODA Daily Pulse, which will give you insight into the nature of the threat and risks to business operations.


Tagged: AI

About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information and communications technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.