The Autonomy Institute’s new report reveals a sharp rise in AI-related risks disclosed by S&P 500 companies, signaling a pivotal shift in corporate awareness, regulatory exposure, and competitive threat perception. This analysis examines how generative AI is reshaping corporate risk disclosures (and where systemic threats to the U.S. economy may emerge next).
AI risk disclosures are surging across the S&P 500, and this post offers a top-level analysis of what boards, business leaders, and investors need to know. Specifically: 76% of S&P 500 firms now cite AI as a material risk, a steep rise in just one year.
This analysis offers a comprehensive review of how AI is transforming corporate risk profiles. It signals that generative AI is no longer just a strategic asset; it is now a systemic risk. The findings provide early warning indicators for regulators, investors, and executive teams who must align risk governance frameworks with an accelerating and opaque technological landscape.
Disillusionment is also emerging: 57 companies disclosed the risk that AI may not deliver ROI or operational benefits.
For the full report, see: “Their Capital At Risk: The Rise of AI as a Threat to the S&P 500” (Autonomy Institute, July 2025)
CISOs and CIOs must audit exposure to third-party AI vendors and data flows, particularly around sensitive IP and customer data.
The Macroeconomic Effects of Artificial Intelligence (CRS, Jan 2025) – A policy primer that outlines AI’s potential impact on labor, productivity, and inequality. Highlights include slow diffusion, mild labor displacement to date, and uncertain long-term GDP effects (0.9% over 10 years per some estimates). This Congressional Research Service (CRS) report provides a complementary macroeconomic perspective on AI’s impact, highlighting how adoption remains modest but is expected to grow steadily over time. Unlike the Autonomy Institute’s firm-level analysis of AI as a direct risk vector in corporate filings, CRS frames AI more as a long-term growth driver with uncertain near-term effects.
The Simple Macroeconomics of AI (NBER, May 2024) – Daron Acemoglu argues AI’s macro impact may be modest unless it enables the creation of new labor-augmenting tasks. He estimates a 0.53% TFP gain over a decade, warns of potential negative welfare impacts from “bad new tasks” (e.g., deepfakes), and sees widening capital-labor inequality without structural intervention. This research offers a high-level theoretical framework that complements the Autonomy Institute’s granular analysis of S&P 500 risk disclosures. While Autonomy investigates how firms are documenting AI as a risk, this NBER study focuses on how AI could function as a general-purpose technology (GPT), influencing economic growth and productivity over time.
AI Risk Has Become Ubiquitous in Corporate America
The analysis of 10-K filings shows that 3 in 4 S&P 500 companies (380 in total) added or expanded upon artificial intelligence-related risks in the past year. This marks a dramatic and coordinated shift in how corporate leadership views the AI landscape: not just as an opportunity but as a potential liability. These are not speculative remarks; 10-K disclosures are legal instruments meant to limit litigation exposure, which means these risk acknowledgments reflect high-confidence internal concerns.
Malicious Use of AI Is a Growing Threat Vector
More than one-third of the companies (193 in total) now flag the risk that bad actors could use AI systems to manipulate markets, impersonate executives, commit fraud, or breach digital perimeters. These warnings signal that firms are increasingly experiencing or anticipating real-world AI-enabled adversarial behavior. In sectors like finance, defense, and tech, AI is no longer just a competitive tool – it is a weapon in the hands of threat actors.
Deepfake Mentions Are Rising Rapidly
The number of firms explicitly citing “deepfakes” in their disclosures jumped from 16 to 40 in one year. This trend aligns with concerns about corporate impersonation, fake news impacting stock prices, and manipulated audio/video targeting brand trust or executive credibility. The 2.5-fold increase in disclosures in this domain suggests growing boardroom awareness of how AI-generated synthetic media could directly harm reputation and market performance.
Data Leakage and IP Exposure Through AI Interactions
Nearly 1 in 5 companies (95 total) cited the risk that proprietary or sensitive data could be inadvertently exposed via interaction with AI models (particularly those hosted by third parties). As employees across industries increasingly use tools like ChatGPT or Copilot, companies are becoming aware that sensitive prompts may be absorbed into model training data, creating permanent and potentially uncontrolled exposure of trade secrets and customer data.
Vendor Dependency on AI Providers as a Structural Weakness
Roughly 1 in 10 companies (56 total) warned of over-reliance on third-party AI model providers such as OpenAI, Anthropic, and other API-based services. These disclosures often cited concerns about uptime, model updates, security risks, contractual opacity, and the inability to audit model outputs. This signals a growing awareness that corporate digital infrastructure is becoming entangled with fast-moving, often opaque AI startups.
Energy Demands from AI Are Emerging as a Material Infrastructure Risk
Among utilities firms, 1 in 3 (10 of 30) now reference the strain that AI-related data center demand is placing on power grids and long-term infrastructure planning. Several utilities highlight the sudden surge in requests for grid interconnects and hyperscale energy draw from AI-linked real estate developments – posing operational, regulatory, and capital allocation risks in already-stressed regions.
Regulatory Burdens Are Escalating, Particularly in the EU
The number of companies citing the EU AI Act as a regulatory risk has more than tripled year-over-year, from 21 to 67. These references suggest that compliance costs, audit mandates, and restrictions on certain AI systems are no longer just European issues; they’re becoming a drag on multinational innovation timelines and legal risk calculations, especially for companies operating in both U.S. and EU jurisdictions.
Export Control Risks Are Now Material for AI Hardware Vendors
Seven major technology companies disclosed risk exposure to U.S. export control measures targeting AI-enabling technologies – especially advanced chips and compute clusters destined for China. These disclosures warn of lost revenue, retaliatory trade action, and global supply chain realignment. This reinforces that AI is not just a commercial opportunity. It’s now tightly entangled with geopolitical maneuvering.
AI Competition Is a Strategic Imperative, but Not Without Risk
Roughly one-third of firms (168 total) acknowledged AI as a competitive differentiator, expressing concern that failing to keep up with AI innovation could lead to loss of market share. At the same time, 11% of companies explicitly warned that AI investment may not produce expected returns (suggesting a growing risk of overinvestment or premature deployment).
Employment Risks Remain Minimally Addressed in Filings
In stark contrast to public debate about job displacement, only six companies mention labor impacts or workforce transformation in their 10-Ks. This indicates a strategic blind spot or reputational hesitation to acknowledge the destabilizing labor effects of AI. It also suggests that corporate America may be underestimating or delaying acknowledgment of one of the most socially visible risks.
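The headline shares in the findings above follow directly from the disclosure counts, taking the S&P 500’s roughly 500 constituents as the denominator. A quick back-of-envelope check (counts are those cited above from the report; the flat 500 denominator is an approximation, since the index holds slightly more than 500 tickers):

```python
# Approximate shares implied by the disclosure counts cited in this analysis.
counts = {
    "any AI-related risk": 380,       # "3 in 4"
    "malicious use of AI": 193,       # more than one-third
    "data leakage via AI models": 95, # nearly 1 in 5
    "AI vendor dependency": 56,       # roughly 1 in 10
    "AI competitive risk": 168,       # roughly one-third
    "AI may not deliver ROI": 57,     # ~11%
}

for label, n in counts.items():
    print(f"{label}: {n}/500 = {n / 500:.0%}")
```

These rounded shares match the fractions used in the prose (380/500 is exactly 76%, the figure in the headline finding).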
By combining quantitative economic analysis with real-world business applications, this series aims to provide decision-makers with actionable insights on how AI is shaping the global economy and where the real opportunities and risks lie.
The OODA Loop Macroeconomics of AI in 2025 series is a market-driven, enterprise-focused set of posts that aims to provide a data-driven perspective on AI’s macroeconomic impact. It seeks to move beyond the hype cycle by grounding discussion in concrete economic metrics and empirical analysis. The series examines global AI development, adoption trends, and economic maturation, analyzing real-world use cases and case studies to assess AI’s contributions to GDP and productivity growth.
A key analytical framework used is Jeffrey Ding’s Tech Diffusion Model, which helps measure AI adoption rates and integration across industries. The series also incorporates extensive research reports and white papers (from organizations such as CSET, Stanford’s HAI, and the NBER, among other trusted sources), including research-based insights on AI’s role in workforce automation, augmentation, and replacement metrics. The focus is on understanding AI-driven economic acceleration, growth, diffusion, and productivity shifts while addressing policy imperatives to ensure balanced and sustainable AI integration.
The Macroeconomics of AI in 2025: This post explores the uncertain macroeconomic impacts of artificial intelligence (AI). While AI holds the promise of enhancing productivity and spurring economic growth, concerns remain about potential job displacement, intensifying global competition, the rapid acceleration and diffusion of AI technologies, and the possibility of misleading economic indicators. The analysis underscores the need for a nuanced understanding of AI’s multifaceted effects on the global economy.
A New Economic Paradigm: Transformative AI Diffused Across Industry Sectors: This post discusses how transformative AI is reshaping various industries, compelling businesses to reconsider their value creation and delivery methods. It emphasizes the necessity for companies to adapt their business models and value propositions to remain competitive in this evolving economic landscape. The post delves into how AI redefines growth, labor dynamics, innovation processes, value creation, and policy frameworks, signaling a shift towards a new economic paradigm driven by AI integration across sectors.
The Stanford Institute for Human-Centered AI (HAI): Annual Macroeconomic AI Trends and Impacts: Since 2017, HAI has produced one of the best-in-class annual studies on macro-level trends and impacts of artificial intelligence. The HAI AI Index 2024 frames the key shifts over the last year, providing a critical outlook for decision-makers navigating the economic transformation induced by the diffusion of AI across industry sectors.
The Macroeconomic Impact and Security Risks of AI in the Cloud: The rapid expansion of artificial intelligence in cloud environments is reshaping industries, but security vulnerabilities are mounting just as fast. The “State of AI in the Cloud 2025” report by Wiz Research highlights the rapid integration of AI into cloud environments, emphasizing both opportunities and challenges, and underscores the need for organizations to pair innovation with robust security and governance frameworks. As AI becomes an indispensable part of cloud operations, businesses must strike a balance between innovation and protection to navigate this accelerating infrastructure build-out.
Cross-Border Data Flows and the Economic Implications of Data Regulation: Any macroeconomic consideration of the impacts of artificial intelligence must also include the future of data writ large: in this era of exponential disruption and acceleration, the flow of data across borders has become the backbone of global trade and economic activity. Currently, regulatory interventions, driven by concerns over privacy, security, and national interest, are reshaping this landscape. A recent report sheds light on the economic trade-offs of data regulation, offering empirical evidence to inform business decisions.
The Economic Transformation of Generative AI: Key Insights on Productivity and Job Creation: If there is any macroeconomic impact of artificial intelligence we have been looking to frame (as part of this OODA Loop Macroeconomics of AI in 2025 series), it is this: “Fears of large-scale technological unemployment are probably overblown. The history of general-purpose technologies shows that the growth they bring is accompanied by a strong demand for labor. However, this increased demand is often in new occupations. For example, more than 85% of total U.S. employment growth since 1940 has come in entirely new occupations.” This post draws further insights from MIT Professor Andrew McAfee’s report Generally Faster: The Economic Impact of Generative AI (produced while McAfee was the inaugural Technology & Society Visiting Fellow at Google Research in 2024). Generative AI could enable nearly 80% of U.S. workers to complete at least 10% of their tasks twice as fast without quality loss, creating substantial economic opportunities and reshaping the labor market.
The AI Acceleration of Moore’s Law: Actionable “Long Task” Performance Metrics for Your Business Strategy: AI is evolving faster than expected: researchers at METR have discovered that AI’s ability to autonomously complete tasks has been doubling every 7 months since 2019, mirroring Moore’s Law. If this trend holds, AI could independently execute month-long human-equivalent projects by 2030, transforming automation, workforce dynamics, and business strategy.
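As a rough illustration, the METR doubling claim can be modeled as simple exponential growth. The unit baseline, start year, and flat 7-month doubling period below are illustrative assumptions for the arithmetic, not METR’s measured values:

```python
import math

def task_horizon(months_elapsed: float, baseline: float = 1.0,
                 doubling_months: float = 7.0) -> float:
    """Task horizon after a given number of months, assuming a fixed
    doubling period (illustrative sketch of the METR trend)."""
    return baseline * 2 ** (months_elapsed / doubling_months)

# Roughly 11 years (132 months) from 2019 to 2030.
months = 11 * 12
growth = task_horizon(months)  # growth factor relative to the 2019 baseline
print(f"{months / 7:.1f} doublings -> ~{growth:,.0f}x the 2019 task horizon")
```

Under these assumptions the horizon multiplies by roughly half a million over the period, which is how a trend measured in minutes-long tasks could reach month-long projects by 2030 if it holds.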
Big Tech’s Cloud Dominance: Fueling the AI Arms Race: In the rapidly evolving landscape of artificial intelligence (AI), cloud computing has emerged as the linchpin of Big Tech’s strategy to dominate the AI frontier. As companies like Microsoft, Amazon, and Google pour billions into AI development, their expansive cloud infrastructures are not just supporting this growth – they’re driving it.
Managing AI’s Economic Future: Strategic Automation in a World of Uncertainty: RAND’s roadmap for AI-driven economic policy confronts the high-stakes trade-offs of growth, inequality, and global competition.
Mapping the AI Economy: Task-Level Insights from Millions of Claude Conversations: In a recent seminar at the Stanford Digital Economy Lab, Alex Tamkin of Anthropic presented findings from an analysis of over four million Claude conversations to reveal how AI is currently used in real-world economic tasks. The study identifies where and how AI tools like Claude augment or automate work by mapping AI activity to the U.S. Labor Department’s O*NET job database.
Thriving in a Post-Labor Economy of AI and Automation: This post explores how organizations, governments, and individuals can proactively adapt to the accelerating displacement and augmentation of human labor by AI systems. It frames the post-labor economy as a strategic opportunity rather than a threat (emphasizing the need for new educational paradigms, universal digital infrastructure, and institutional agility to manage the transition).
Notable Voices on the Post-Labor Economy: Things will be better, faster, cheaper, and safer: A chorus of techno-optimist perspectives from leading thinkers like Marc Andreessen, Sam Altman, and Balaji Srinivasan argues that the rise of AI and automation will usher in a future of abundance, efficiency, and safety. These voices suggest that as AI offloads more labor, human creativity and well-being will flourish, not diminish. This divergence underscores a critical tension between public techno-utopian narratives and the quiet, risk-averse signals emerging from the boardrooms of America’s largest companies.