
Herding, Hacks, and Hidden Bias: AI as a New Source of Financial Systemic Risk

Artificial intelligence is rapidly transforming how U.S. capital and derivatives markets operate — from trading algorithms and risk models to regulatory surveillance — raising new systemic, cybersecurity, and accountability challenges that Congress and regulators must address.

AI Oversight in Financial Markets: Emerging Policy Frontiers for the SEC and CFTC

Why This Matters

Key concerns now include third-party dependency, model bias, market herding, and AI-enabled manipulation — all of which could trigger broader systemic and cyber risks.

AI has become foundational in both capital markets and derivatives trading, used by nearly all leading financial institutions. The Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) are both exploring how to balance innovation with stability.

These Congressional Research Service (CRS) reports highlight how AI governance is moving from theoretical risk management to urgent regulatory design:

Key Points

AI Use Across Financial Systems

  • AI powers investment management, robo-advisors, fraud detection, risk modeling, and client interaction across financial institutions.
  • The CFTC reports nearly universal AI adoption across derivatives markets — 99% of major firms use AI in some capacity.

Regulatory Applications

  • The SEC’s 2024 AI Use Case Inventory lists 30 internal deployments, including detecting manipulative trades and improving comment analysis.
  • Both the SEC and CFTC are experimenting with “suptech” tools (supervisory technology) to audit AI systems.
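To make the "suptech" idea concrete, here is a minimal, hypothetical sketch of the kind of rule-based surveillance check such tools automate. It flags accounts that buy and sell the same instrument within a short window, a crude proxy for wash trading; the `Trade` structure, thresholds, and logic are illustrative assumptions, not a description of actual SEC or CFTC systems.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Trade:
    account: str
    symbol: str
    side: str    # "buy" or "sell"
    qty: int
    ts: float    # epoch seconds

def flag_wash_trades(trades, window=60.0):
    """Flag (account, symbol) pairs with an opposite-side round trip
    inside `window` seconds -- a simplistic wash-trading heuristic of
    the sort surveillance systems screen for before human review."""
    flagged = set()
    by_key = defaultdict(list)
    for t in trades:
        by_key[(t.account, t.symbol)].append(t)
    for key, ts in by_key.items():
        ts.sort(key=lambda t: t.ts)
        for a, b in zip(ts, ts[1:]):
            if a.side != b.side and (b.ts - a.ts) <= window:
                flagged.add(key)
    return flagged
```

Real surveillance pipelines are far richer (cross-account linkage, intent signals, statistical baselines), but the pattern is the same: machine-scale screening that routes candidates to human investigators.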

Concentration and Third-Party Risks

  • A few large AI and cloud providers dominate infrastructure, creating single points of failure.
  • GAO and CRS warn that AI concentration could magnify financial instability if one provider experiences outages or breaches.

Market Herding and Correlation Risks

  • Firms often rely on similar AI models and training data, which can synchronize trades and amplify volatility.
  • Both reports note that “AI herding” could propagate systemic shocks during high-volatility events.
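The herding mechanism can be illustrated with a toy simulation (an assumption-laden sketch, not a model from either CRS report): each firm trades on a signal that blends a shared model output with an idiosyncratic one. As the shared weight rises, order flow synchronizes and price swings grow.

```python
import random

def simulate(n_firms=50, steps=200, shared=0.9, seed=7):
    """Toy price path driven by the net order flow of `n_firms`
    traders, each acting on: shared*common_signal + (1-shared)*noise.
    Higher `shared` means more similar models, hence more herding."""
    rng = random.Random(seed)
    price, path = 100.0, []
    for _ in range(steps):
        common = rng.gauss(0, 1)  # signal every firm sees (same model/data)
        net = sum(
            shared * common + (1 - shared) * rng.gauss(0, 1)
            for _ in range(n_firms)
        )
        price += 0.01 * net       # net order flow moves the price
        path.append(price)
    return path

def volatility(path):
    """Standard deviation of one-step price changes."""
    rets = [b - a for a, b in zip(path, path[1:])]
    mean = sum(rets) / len(rets)
    return (sum((r - mean) ** 2 for r in rets) / len(rets)) ** 0.5
```

Running `volatility(simulate(shared=0.9))` against `volatility(simulate(shared=0.1))` shows the effect: idiosyncratic signals largely cancel in aggregate, while a shared signal scales with the number of firms, which is the core of the correlation concern.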

Bias, Fraud, and Manipulation

  • CRS flags AI bias as a potential amplifier of inequality in access to credit and investment.
  • In derivatives markets, AI trading bots have autonomously discovered manipulative strategies — such as price pumping — without human direction.
  • Regulators face challenges applying intent-based manipulation laws to autonomous algorithms.

Accountability and AI Washing

  • CRS highlights growing enforcement against AI misrepresentation (“AI washing”) — companies exaggerating AI use to attract investment.
  • The SEC’s AI Task Force (2025) targets both compliance innovation and enforcement clarity.

Summary of Each Source

Artificial Intelligence in Capital Markets: Policy Issues — CRS (Sept 2025)

This report examines how AI is reshaping the SEC’s regulatory perimeter. It identifies four core risk domains:

  1. Auditability and Explainability: Black-box AI models limit human oversight.
  2. Accountability: Unclear liability when autonomous systems fail.
  3. Concentration Risk: Overreliance on a few AI infrastructure providers.
  4. AI Bias and Fraud: Risks of unequal treatment and deceptive AI claims.

The SEC’s AI Task Force and roundtables in 2025 mark early attempts to coordinate AI governance. Proposed legislation, such as the Unleashing AI Innovation in Financial Services Act (H.R. 4801), envisions AI “innovation sandboxes” to balance experimentation with investor protection.

Artificial Intelligence and Derivatives Markets: Policy Issues — CRS (July 2025)

This report focuses on the CFTC’s oversight of AI in derivatives and futures trading.
It warns that:

  • Third-party cloud and AI vendors pose systemic operational risks.
  • High-speed, “out-of-the-loop” trading can cause flash crashes.
  • Large Language Models (LLMs) can both enhance liquidity and amplify bubbles.
  • Generative AI may autonomously learn market manipulation tactics, challenging enforcement frameworks.

The report suggests a need for human-in-the-loop requirements, AI-specific surveillance systems, and developer liability standards.
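A human-in-the-loop requirement of the kind the report describes can be sketched as a simple control gate: the AI strategy's orders auto-execute only below configured thresholds, and anything larger is held for manual approval. The class, field names, and limits below are hypothetical illustrations, not a prescribed design.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:
    symbol: str
    qty: int
    notional: float

@dataclass
class HumanInTheLoopGate:
    """Minimal human-in-the-loop control: AI-generated orders execute
    automatically only under the size limits; larger orders queue for
    a human reviewer instead of reaching the market."""
    max_qty: int = 10_000
    max_notional: float = 1_000_000.0
    pending_review: List[Order] = field(default_factory=list)
    executed: List[Order] = field(default_factory=list)

    def submit(self, order: Order) -> str:
        if order.qty > self.max_qty or order.notional > self.max_notional:
            self.pending_review.append(order)
            return "held-for-review"
        self.executed.append(order)
        return "executed"
```

In practice such gates sit alongside kill switches and pre-trade risk checks; the design question regulators face is where to set the thresholds so that oversight is meaningful without defeating the speed advantages firms are buying.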

What Next?

  1. Regulatory Convergence: Expect closer coordination between the SEC and CFTC around cross-market AI stability frameworks.
  2. AI Liability Regimes: Congress may need to define accountability for autonomous systems that cause market harm.
  3. Standardized AI Testing: New mandates could require AI model validation, audit trails, and disclosure protocols for trading algorithms.
  4. Market Infrastructure Resilience: GAO and Treasury may expand oversight of AI concentration and third-party dependencies.

Recommendations from the CRS Reports

  • Adopt “Human-in-the-Loop” Protocols: Require manual oversight for AI trading thresholds and critical decision points.
  • Enhance Model Transparency: Mandate explainability documentation and independent audits of financial AI systems.
  • Expand AI Literacy for Regulators: Implement OMB’s M-25-21 guidance for an “AI-ready federal workforce.”
  • Coordinate Across Jurisdictions: Align with IOSCO, IMF, and FSB recommendations on AI systemic risk management.
  • Enforce Anti–AI-Washing Measures: The SEC should continue monitoring misleading claims about AI capabilities.

Additional OODA Loop Resources

More On Our 2025 OODA Loop Research Series: The Macroeconomics of AI in 2025

By combining quantitative economic analysis with real-world business applications, this series aims to provide decision-makers with actionable insights on how AI is shaping the global economy and where the real opportunities and risks lie.

The OODA Loop Macroeconomics of AI in 2025 series is a market-driven, enterprise-focused, quantitative effort to provide a data-driven perspective on AI’s macroeconomic impact. It seeks to move beyond the hype cycle by grounding discussion in concrete economic metrics and empirical analysis. The series examines global AI development, adoption trends, and economic maturation, analyzing real-world use cases and case studies to assess AI’s contributions to GDP and productivity growth.

A key analytical framework used is Jeffrey Ding’s Tech Diffusion Model, which helps measure AI adoption rates and integration across industries. The series also incorporates extensive research reports and white papers (from organizations like CSET, Stanford’s HAI, and the NBER, among other trusted sources), including research-based insights on AI’s role in workforce automation, augmentation, and replacement metrics. The focus is on understanding AI-driven economic acceleration, growth, diffusion, and productivity shifts while addressing policy imperatives to ensure balanced and sustainable AI integration.

From The Macroeconomics of AI in 2025 Series

The Macroeconomics of AI in 2025: This post explores the uncertain macroeconomic impacts of artificial intelligence (AI). While AI holds the promise of enhancing productivity and spurring economic growth, concerns remain about potential job displacement, intensifying global competition, the rapid acceleration and diffusion of AI technologies, and the possibility of misleading economic indicators. The analysis underscores the need for a nuanced understanding of AI’s multifaceted effects on the global economy.

A New Economic Paradigm: Transformative AI Diffused Across Industry Sectors: This post discusses how transformative AI is reshaping various industries, compelling businesses to reconsider their value creation and delivery methods. It emphasizes the necessity for companies to adapt their business models and value propositions to remain competitive in this evolving economic landscape. The post delves into how AI redefines growth, labor dynamics, innovation processes, value creation, and policy frameworks, signaling a shift towards a new economic paradigm driven by AI integration across sectors.

The Stanford Institute for Human-Centered AI (HAI): Annual Macroeconomic AI Trends and Impacts: Since 2017, the Stanford Institute for Human-Centered AI (HAI) has produced one of the best-in-class annual studies on macro-level trends and impacts of Artificial Intelligence. The HAI AI Index 2024 frames the key shifts over the last year, providing a critical outlook for decision-makers navigating the economic transformation induced by the diffusion of AI across industry sectors.

The Macroeconomic Impact and Security Risks of AI in the Cloud: The rapid expansion of artificial intelligence in cloud environments is reshaping industries, but security vulnerabilities are mounting just as fast. The “State of AI in the Cloud 2025” report by Wiz Research highlights the rapid integration of artificial intelligence (AI) into cloud environments, emphasizing both opportunities and challenges. The report underscores the necessity for organizations to pair innovation with robust security and governance frameworks. As AI becomes an indispensable part of cloud operations, businesses must strike a balance between innovation and protection to navigate this accelerating infrastructure build-out.

Cross-Border Data Flows and the Economic Implications of Data Regulation: Any macroeconomic consideration of the impacts of artificial intelligence must also include the future of data writ large: in this era of exponential disruption and acceleration, the flow of data across borders has become the backbone of global trade and economic activity. Currently, regulatory interventions, driven by concerns over privacy, security, and national interest, are reshaping this landscape. A recent report sheds light on the economic trade-offs of data regulation, offering empirical evidence to inform business decisions.

The Economic Transformation of Generative AI: Key Insights on Productivity and Job Creation: If there is any macroeconomic impact of artificial intelligence we have been looking to frame (as part of this OODA Loop Macroeconomics of AI in 2025 series), it is this: “Fears of large-scale technological unemployment are probably overblown. The history of general-purpose technologies shows that the growth they bring is accompanied by a strong demand for labor. However, this increased demand is often in new occupations. For example, more than 85% of total U.S. employment growth since 1940 has come in entirely new occupations.” In this post: find further valuable insights from MIT Professor Andrew McAfee’s report Generally Faster: The Economic Impact of Generative AI (produced while McAfee was the inaugural Technology & Society Visiting Fellow at Google Research in 2024). Generative AI could enable nearly 80% of U.S. workers to complete at least 10% of their tasks twice as fast without quality loss, creating substantial economic opportunities and reshaping the labor market.

The AI Acceleration of Moore’s Law: Actionable “Long Task” Performance Metrics for Your Business Strategy: AI is evolving faster than expected: researchers at METR have discovered that AI’s ability to autonomously complete tasks has been doubling every 7 months since 2019, mirroring Moore’s Law. If this trend holds, AI could independently execute month-long human-equivalent projects by 2030, transforming automation, workforce dynamics, and business strategy.

Big Tech’s Cloud Dominance: Fueling the AI Arms Race: In the rapidly evolving landscape of artificial intelligence (AI), cloud computing has emerged as the linchpin of Big Tech’s strategy to dominate the AI frontier. As companies like Microsoft, Amazon, and Google pour billions into AI development, their expansive cloud infrastructures are not just supporting this growth – they’re driving it.

Managing AI’s Economic Future: Strategic Automation in a World of Uncertainty: RAND’s roadmap for AI-driven economic policy confronts the high-stakes trade-offs of growth, inequality, and global competition.

Mapping the AI Economy: Task-Level Insights from Millions of Claude Conversations: In a recent seminar at the Stanford Digital Economy Lab, Alex Tamkin of Anthropic presented findings from an analysis of over four million Claude conversations to reveal how AI is currently used in real-world economic tasks. The study identifies where and how AI tools like Claude augment or automate work by mapping AI activity to the U.S. Labor Department’s O*NET job database.

Thriving in a Post-Labor Economy of AI and Automation: This post explores how organizations, governments, and individuals can proactively adapt to the accelerating displacement and augmentation of human labor by AI systems. The article frames the post-labor economy as a strategic opportunity rather than a threat (emphasizing the need for new educational paradigms, universal digital infrastructure, and institutional agility to manage the transition).

Notable Voices on The Post Labor Economy: Things will be better, faster, cheaper, and safer: A chorus of techno-optimist perspectives from leading thinkers like Marc Andreessen, Sam Altman, and Balaji Srinivasan argues that the rise of AI and automation will usher in a future of abundance, efficiency, and safety. These voices suggest that as AI offloads more labor, human creativity and well-being will flourish, not diminish. This divergence underscores a critical tension between public techno-utopian narratives and the quiet, risk-averse signals emerging from the boardrooms of America’s largest companies.

When AI Becomes a Material Risk Class: What the S&P 500’s AI Disclosures Reveal About Executive Risk Perception: The Autonomy Institute’s new report reveals a sharp rise in AI-related risks disclosed by S&P 500 companies, signaling a pivotal shift in corporate awareness, regulatory exposure, and competitive threat perception. This analysis reveals how generative AI is reshaping corporate risk disclosures (and where systemic threats to the U.S. economy may emerge next).

From Cloud to Country: The OECD, Google, and Microsoft on the Global Compute and AI Diffusion Landscape: AI’s diffusion is now inseparable from its compute infrastructure. 2025 marks the year when nations and hyperscalers began measuring, monetizing, and governing “AI compute” as a strategic asset, linking sovereign cloud capacity, public-private infrastructure build-outs, and the spread of AI capabilities across sectors and economies.

Daniel Pereira

About the Author

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.