AI, machine learning, and data science will be used to create some of the most compelling technological advancements of the next decade. The OODA team will continue to expand our reporting on AI issues.
Recent OODA analysis on Artificial Intelligence:
- AI’s diffusion is now inseparable from its compute infrastructure. 2025 marks the year when nations and hyperscalers began measuring, monetizing, and governing “AI compute” as a strategic asset, linking sovereign cloud capacity, public-private infrastructure build-outs, and the spread of AI capabilities across sectors and economies.
- In my recent invited Congressional expert witness testimony before the U.S. House Judiciary Committee’s Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet, I emphasized that the United States is experiencing multiple simultaneous tech revolutions beyond just AI. As we navigate advances in space tech, biotech, quantum tech, and the miniaturization of sensors and…
- Artificial intelligence is rapidly transforming how U.S. capital and derivatives markets operate — from trading algorithms and risk models to regulatory surveillance — raising new systemic, cybersecurity, and accountability challenges that Congress and regulators must address.
- Alphabet recently released the details of X Development’s Project Suncatcher – an audacious vision for scaling artificial intelligence (AI) infrastructure into orbit by coupling solar-powered satellites equipped with Tensor Processing Units (TPUs) and free-space optical networks.
- Consistent with the substance, tone and tenor of presentations at OODAcon 2025 last week, Secretary Hegseth earlier today announced a major acquisition reform effort designed to put rapid acquisition of technology at the center of Pentagon operations and strategic policy.
- Dartmouth’s Institute for Security, Technology, and Society (ISTS) convened thirty experts from government, industry, academia, and venture capital to assess how the private sector currently supports U.S. offensive cyber operations—and to propose ways to scale this partnership responsibly.
- This summer at DEFCON 33, DARPA’s 2025 AI Cyber Challenge showcased how generative and agentic AI can autonomously find and fix vulnerabilities in complex software systems.
- A new body of reports reveals how Beijing’s state-backed networks are embedding control across semiconductors, software, and infrastructure, transforming commercial interdependence into strategic dominance.
- The upcoming OODAcon 2025 session “Securing Internet First Principles: Access, Privacy, and Expression in the Age of Disruptive Technology”—a fireside chat with SnowStorm CEO Serene—arrives at a critical inflection point for global digital governance.
- IBM and Anthropic are pushing forward a new standard for enterprise-grade AI with the Model Context Protocol (MCP), an emerging framework for securely connecting AI agents with enterprise data and applications.
- The new front in great power competition is not cyber or space, but cognition. Beijing and Moscow are systematically weaponizing perception – deploying artificial intelligence, disinformation, and psychological operations to erode trust in institutions, fracture alliances, and weaken democratic resolve from within.
- Russia’s covert “shadow fleet” of tankers and the global resurgence of piracy mark a new phase in maritime competition. From the Baltic to the Caribbean, gray-zone logistics, economic warfare, and criminal opportunism are converging – turning the world’s sea lanes into a testbed for deterrence in the age of AI surveillance.
- The transformative potential of exponential technologies – such as AI, quantum computing, robotics, additive manufacturing, and synthetic biology – depends less on breakthroughs in the lab than on whether society can mobilize the necessary power, capital, and skilled labor to deploy them at scale.
- A synthesis of insights from books, reports, and expert dialogues (included in previous OODA Loop analysis or drawn from our research queue) points to a central theme: enterprises should approach generative and agentic AI systems strategically, given their rapid evolution, emergent behaviors, and the risk of architectural lock-in.
- Agentic AI is evolving from experimental tools into self-directed, networked ecosystems that promise to reshape how enterprises operate, innovate, and compete. Recent in-depth articles on the role of the agentic mesh architecture in this agentic AI-driven future of the enterprise form the foundation of this analysis.
- The Askwith Forum at the Harvard Graduate School of Education (HGSE) recently hosted a discussion of the future of education and “thinking” – with origins in U.S. intelligence community research on why certain superforecasters consistently outperformed experts in predicting geopolitical events.
- The Future of Privacy Forum (FPF) has released a new report capturing how companies are approaching risk assessments for artificial intelligence.
- AI as a Service (AIaaS) isn’t just “the next SaaS”; it’s the only viable way enterprises are going to bring AI into their environments at scale.
- As generative AI becomes embedded in national security, healthcare, and critical infrastructure, red-teaming is rapidly becoming a frontline strategy for evaluating model risk and resilience. A new Software Engineering Institute (SEI) study urges business leaders to learn from decades of experience in cybersecurity red-teaming to overcome current weaknesses in threat modeling, tooling, and impact assessment.
- As organizations accelerate their journey toward AI-driven productivity, audit frameworks to assess which functions are suitable for LLM automation have emerged as critical tools for managing risk, scaling solutions, and aligning strategy.
- A little over a year ago, in July of 2024, the massive CrowdStrike outage showed how a single vendor’s content update can ripple into global operational risk – exposing dependencies in platform design, vendor concentration, and incident governance. One year on, platform owners and regulators are moving toward architectural guardrails and reporting harmonization, but enterprises…
- When a totalitarian government like the Chinese Communist Party (CCP)-led People’s Republic of China (PRC) issues “opinions” from the State Council, these should not be mistaken for optional policy musings. In practice, they function as directive roadmaps. Compliance is expected across every level of society, from ministries to enterprises to research institutions, and this guidance…
- A new CSIS report by Emily Benson Unger and Alexander McLean warns that the next phase of AI diffusion will be decided as much in Lagos and Nairobi as in Silicon Valley and Shenzhen: open-source…
- As artificial intelligence continues to transform all aspects of our economy, business leaders from every industry have been seeking ways to improve competitiveness and take advantage of the business value of these new technologies. Some of the most impactful variants of AI are those known as Agentic AI. This approach, which involves the use of…
- Machines are evolving from our tools into our teammates, and our youngest are growing up with them: they are AI natives.
OODA Loop Analysis
The Executive’s Guide To Artificial Intelligence: What you need to know about what really works and what comes next – The megatrend of Artificial Intelligence is transforming the algorithms of business in exciting ways.
OpenAI CEO Sam Altman Testifies on “Oversight of A.I.: Rules for Artificial Intelligence” – Fast on the heels of his May 4th meeting at the White House with Vice President Kamala Harris and other top administration officials to discuss responsible AI innovation, OpenAI CEO Sam Altman…
NIST Makes Available the Voluntary Artificial Intelligence Risk Management Framework (AI RMF 1.0) and the AI RMF Playbook – The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has released its Artificial Intelligence Risk Management Framework (AI RMF 1.0), along with an accompanying AI RMF Playbook.
Using Artificial Intelligence For Competitive Advantage in Business – AI technologies are making continuous advances in domains like industrial robotics, logistics, speech recognition and translation, banking, medicine and advanced scientific research. But in almost every case, the cutting edge AI that drives the advances drops from attention, becoming almost invisible when it becomes part of the overall system.
The Future of Enterprise Artificial Intelligence – A short overview of the business aspects of AI focused on informing decisions including due diligence.
Opportunities for Advantage: Measuring Trends in Artificial Intelligence – A summary post of major trends.
AI Security: Four Things to Focus on Right Now – This is the only security framework we have seen that helps prevent AI issues before they develop.
When Artificial Intelligence Goes Wrong – By studying AI failures, we can help mitigate them.
The Future of AI Policy is Largely Unwritten – Congressman Will Hurd provides insight on the emerging technologies of AI and Machine Learning.
AI Will Test American Values In The Battlefield – How will military leaders deal with AI that may treat troops as expendable assets to win the “game”?
The AI Capabilities DoD Says They Need The Most – Savvy businesses will pay attention to what this major customer wants.
What Leaders Need to Know About the State of Natural Language Processing – Major improvements in the ability of computers to understand what humans write, say and search are being fielded. These improvements are significant, and will end up changing just about every industry in the world. But at this point they are getting little notice outside a narrow segment of experts.
NATO and US DoD AI Strategies Align with over 80 International Declarations on AI Ethics – NATO’s release in October of its first-ever strategy for artificial intelligence is primarily concerned with the impact AI will have on the NATO core commitments of collective defense, crisis management, and cooperative security. Worth a deeper dive is a framework within the overall NATO AI Strategy, which mirrors that of the DoD Joint Artificial Intelligence Center’s (JAIC) efforts to establish norms around AI: “NATO establishes standards of responsible use of AI technologies, in accordance with international law and NATO’s values.” At the center of the NATO AI strategy are the following six principles: Lawfulness, Responsibility and Accountability, Explainability and Traceability, Reliability, Governability, and Bias Mitigation.
“AI Accidents” framework from the Georgetown University CSET – The Center for Security and Emerging Technology (CSET), in a July 2021 policy brief, “AI Accidents: An Emerging Threat – What Could Happen and What to Do,” makes a noteworthy contribution to current efforts by governmental entities, industry, AI think tanks and academia to “name and frame” the critical issues surrounding AI risk probability and impact. For the current enterprise, as we pointed out as early as 2019 in Securing AI – Four Areas to Focus on Right Now, the fact still remains that “having a robust AI security strategy is a precursor that positions the enterprise to address these critical AI issues.” In addition, enterprises that have adopted and deployed AI systems also need to commit to the systematic logging and analysis of AI-related accidents and incidents.
DHS Science and Technology Directorate (S&T) releases Artificial Intelligence (AI) and Machine Learning (ML) Strategic Plan Amidst Flurry of USG-wide AI/ML RFIs – An artificial intelligence security strategy (see “Securing AI – Four Areas to Focus on Right Now”) should be the cornerstone of any AI and machine learning (ML) efforts within your enterprise. We also recently outlined the need for enterprises to further operationalize the logging and analysis of artificial intelligence (AI) related accidents and incidents based on an “AI Accidents” framework from the Georgetown University CSET. The best analyses form a sophisticated body of work on AI-related issues of morality, ethics, fairness, explainable and interpretable AI, bias, privacy, adversarial behaviors, trust, evaluation, testing, and compliance.
AI-Based Ambient Intelligence Innovation in Healthcare and the Future of Public Safety – Disaster conditions will clearly be more impactful and more frequent due to the impact of climate change. The domestic terrorism threat stateside is becoming a constant, with the impact and frequency of growing domestic U.S. political instability and public safety incidents to be determined. We will need systems that are monitoring these temporal, ephemeral ecosystems and providing insights and recommendations for real-time decision-making support and situational awareness analysis. What can AI-Based Ambient Intelligence Innovation in Healthcare teach us?
The Future of War, Information, AI Systems and Intelligence Analysis – The U.S. is in a struggle to maintain its dominance in air, land, sea, space, and cyberspace over countries with capabilities increasingly on par in all domains with that of the U.S. In addition, information (in all its forms) is the center of gravity of a broad set of challenges faced by the United States. Information, then, is the clear strategic vector of value creation for the emergence of applied technologies to enable operational innovation. For the U.S., the desired outcome is continued dominance for another American Century. For the Chinese, military capabilities usher in the dawn of a new technological superiority and, as a result, geopolitical and military dominance on the world stage.
Katharina McFarland on Winning in the Age of Artificial Intelligence – Katharina McFarland has led change in a wide array of national security domains including Space, Missile Defense, Acquisition and Nuclear Posture. She is a former Assistant Secretary of Defense…