A recent OODA Network monthly meeting underscored the strategic convergence of AI, quantum, and autonomous defense systems, which is driving both opportunity and risk. Follow-on OODA Loop research themes have emerged from that network discussion, including agentic AI governance, Post-Quantum Cryptography deployment, and AI Infrastructure Sovereignty, with an enterprise-facing focus on fast-tracking security readiness, policy modernization, and dual-use innovation scouting to maintain competitive advantage.
Adversarial AI defense is also a vital area of interest: the AI security landscape is shifting rapidly as AI becomes both a defense multiplier and an attack amplifier. That is where this post begins, with recent perspectives from research by OpenAI Global Affairs and the Institute for AI Policy and Strategy (IAPS), with an emphasis on adversarial red teaming, differential access frameworks, and global threat intelligence on malicious AI use.
Summary
The May 2025 OODA Network meeting and recent reports from OpenAI Global Affairs and IAPS highlight the urgent need to integrate AI into defense operations. AI systems now enable near-machine-speed cyberattacks, disinformation campaigns, and malware development. In response, frameworks such as differential access aim to prioritize defenders by tailoring AI cybersecurity capabilities to key actors while limiting offensive misuse.
Why This Matters
Without strategic governance, policy modernization, and defender-focused access controls, AI-enabled threats could outpace current security models.
- AI is accelerating both attack and defense capabilities, shifting the cyber OODA loop to machine speed.
- Malicious uses range from deepfake-driven influence operations to AI-developed malware and scams.
Key Points
- Dual-use AI: AI amplifies defense (e.g., automated detection) but is also weaponized for scalable attacks, including malware development (ScopeCreep) and deception operations (Sneer Review, High Five, VAGue Focus).
- Adversarial AI threats: Real-world exploits include AI-generated malicious code, synthetic identity fraud, and social engineering at scale.
- Differential Access Frameworks: IAPS proposes Promote, Manage, and Deny-by-Default approaches to prioritize defenders while restricting attackers’ access to powerful AI cybersecurity capabilities.
- Strategic challenges: Security poverty lines, agentic AI risks, and policy misalignment remain major obstacles to scalable AI defense.
- Recommended actions: Launch AI red-teaming, review AI coding tools for backdoors, and create clear AI deployment policies.
What Next?
- AI Red Teaming: Immediate deployment across national defense and critical infrastructure.
- Policy Modernization: Update AI security policies to reflect adversarial and dual-use realities.
- Differential Access Implementation: Operationalize frameworks to ensure keystone defenders and low-maturity critical actors gain priority AI cybersecurity capabilities.
- Cross-sector Coordination: Align foundation model developers, governments, and cybersecurity vendors to implement scalable, secure differential access schemes.
Recommendations from the Research
- Establish AIxCyber differential access programs tailored to organizational risk profiles and strategic roles.
- Prioritize adversarial testing as a service for critical infrastructure operators to emulate nation-state-level attacks safely.
- Develop plain-language AI use policies for government and contractors to ensure rapid adoption with governance clarity.
- Expand red teaming beyond code security to include AI-enabled social engineering and influence operations defenses.
- Support security research with managed access schemes, enabling white-hat hackers and academic researchers to enhance open-source security while limiting exploit proliferation.
To access the full reports by IAPS and OpenAI, see:
Disrupting Malicious Uses of AI: June 2025 (OpenAI Global Affairs)
Provides case studies of AI-enabled malicious operations including malware development (ScopeCreep), social engineering (VAGue Focus), deception campaigns (Sneer Review, High Five), and scams (Wrong Number), with threat actor origins spanning China, Russia, North Korea, Iran, Cambodia, and the Philippines.
Differential Access for AIxCyber (IAPS)
Proposes a strategic framework for managing AI cyber capability access, with Promote Access, Manage Access, and Deny-by-Default approaches to tilt the AI offense-defense balance toward defenders while mitigating misuse risks.
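To make the three access tiers concrete, here is a minimal illustrative sketch of how a differential access decision might be operationalized in code. All names, fields, and the tier-assignment logic are hypothetical assumptions for illustration; the IAPS report describes a policy framework, not an implementation.

```python
from dataclasses import dataclass
from enum import Enum


class AccessTier(Enum):
    PROMOTE = "promote"  # proactively provision advanced AI cyber capabilities
    MANAGE = "manage"    # vetted, monitored access
    DENY = "deny"        # deny-by-default for all other actors


@dataclass
class ActorProfile:
    # Hypothetical schema; the IAPS report does not prescribe these fields.
    is_keystone_defender: bool  # e.g., critical-infrastructure operator
    security_maturity: str      # "low", "medium", or "high"
    vetted: bool                # passed identity and intent vetting


def assign_tier(actor: ActorProfile) -> AccessTier:
    """Assign an access tier, denying by default to limit offensive misuse."""
    if actor.is_keystone_defender and actor.vetted:
        # Keystone defenders (including low-maturity critical actors)
        # receive priority access to defensive capabilities.
        return AccessTier.PROMOTE
    if actor.vetted:
        # Other vetted parties get managed, monitored access.
        return AccessTier.MANAGE
    # Everyone else is denied by default.
    return AccessTier.DENY
```

The key design choice the framework implies is the final fall-through: unknown or unvetted actors are denied by default, so the burden of proof sits with the requester rather than the gatekeeper.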
About the Author
Daniel Pereira
Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.