In my recent invited Congressional expert witness testimony before the U.S. House Judiciary Committee’s Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet, I emphasized that the United States is experiencing multiple simultaneous tech revolutions beyond just AI. As we navigate advances in space tech, biotech, quantum tech, and the miniaturization of sensors and robots, we face a critical inflection point for our AI industry.
The U.S. AI industry stands at a crossroads. While we’ve made remarkable progress in developing powerful AI systems, we must now focus on ensuring these technologies advance in ways consistent with U.S. values of freedom, privacy, and individual choice. Any national AI strategy should ensure we don’t stifle advancements toward reliable, trustworthy AI consistent with the values of both free societies and free markets.
First, our AI industry must recognize that companies and communities increasingly demand on-premise AI solutions. Organizations want options that don’t force them to be externally dependent on cloud services where they lose visibility into how their data is used. By some counts, a new Apple iPhone 17 has more computing power than a Cray-3 supercomputer did in 1993. With such computational power at our fingertips, we should be able to run and use AI at the edge without sending our data to AI companies.
At minimum, AI companies should advance federated learning approaches where algorithms can learn on data in situ, without "hoovering away" or pulling that data from its source. This approach respects data sovereignty while still enabling powerful AI capabilities. Companies that fail to offer on-premise options risk losing market share to competitors who understand this growing demand.
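The federated pattern described above can be sketched in a few lines: each site computes a model update on its own data, and only that update, never the raw records, leaves the site. The sites, the one-parameter "model," and the data below are hypothetical illustrations, not any particular vendor's implementation.

```python
# Minimal federated-averaging sketch: each site fits a one-parameter
# model (here, just a mean estimator) locally; only the parameter and
# sample count leave the site -- the raw records never do.

def local_update(records):
    """Train locally: the site's parameter is the mean of its records."""
    return sum(records) / len(records), len(records)

def federated_average(site_updates):
    """Aggregate site parameters, weighted by each site's sample count."""
    total = sum(n for _, n in site_updates)
    return sum(param * n for param, n in site_updates) / total

# Three hypothetical sites; their raw data never leaves this scope.
sites = [[2.0, 4.0], [6.0], [8.0, 10.0, 12.0]]
updates = [local_update(r) for r in sites]   # (param, n) pairs only
global_param = federated_average(updates)
print(global_param)  # weighted mean over all 6 records -> 7.0
```

Real deployments (e.g., FedAvg-style training) exchange gradient or weight updates rather than means, and typically add secure aggregation, but the data-sovereignty property is the same: the aggregator sees summaries, not source data.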
The implications of this shift toward on-premise AI extend beyond just business considerations. When organizations can run AI locally, they maintain control over sensitive information, reduce latency in critical applications, and ensure continuity of operations even when internet connectivity is compromised. For sectors like healthcare, finance, and national security, these benefits are not merely preferences but essential requirements for responsible AI adoption.
Moreover, the environmental impact of constantly transmitting vast amounts of data to centralized cloud servers cannot be overlooked. Local processing reduces the energy consumption associated with data transfer and storage, aligning AI development with sustainability goals. U.S. AI companies have an opportunity to lead in developing energy-efficient, edge-optimized AI solutions that address both privacy concerns and environmental responsibilities.
Second, we must prioritize privacy in our AI development. The U.S. cannot allow AI to create surveillance economies that undermine our fundamental values. We should build on Justice Brandeis’s concept of the right to privacy, including individual choice about when personal data sets are and are not used by an AI. Consumers, companies, and communities deserve a choice about when they want AI assisting them and when they don’t.
This doesn’t mean abandoning personalization or convenience. Rather, it means creating thoughtful “choice architectures” that present clear tradeoffs when someone opts not to share information. However, this should remain an individual or corporate choice, not one forced upon people through operating systems or devices. We’ve already seen how the Internet degraded when advertising began to drive everything. AI will become a product no one trusts if people cannot control what happens with their personal data, usage patterns, and more.
Privacy is not merely a personal preference but a foundational element of democratic societies. When individuals lose control over their personal information, they may self-censor, withdraw from digital participation, or become vulnerable to manipulation. The consequences extend beyond individual harm to potentially undermine civic discourse and democratic processes.
The AI industry must recognize that privacy-preserving technologies are not obstacles to innovation but enablers of sustainable growth. Techniques such as differential privacy, homomorphic encryption, and secure multi-party computation allow AI systems to derive insights from data without compromising individual privacy. By investing in these approaches, U.S. companies can build AI systems that respect privacy by design while still delivering valuable functionality.
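To make the first of those techniques concrete, here is a minimal sketch of differential privacy's standard Laplace mechanism applied to a counting query. The dataset and query are hypothetical; the mechanism itself (Laplace noise scaled to sensitivity 1 for counts) is the textbook construction.

```python
import random

def dp_count(records, predicate, epsilon):
    """Release a count under epsilon-differential privacy by adding
    Laplace(1/epsilon) noise; counting queries have sensitivity 1.
    The noise is sampled as the difference of two iid exponentials,
    which is distributed Laplace(0, 1/epsilon)."""
    true_count = sum(1 for r in records if predicate(r))
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical query: how many patients are over 60, answered without
# any individual's record being exactly recoverable from the output.
ages = [34, 71, 65, 22, 58, 80]
noisy = dp_count(ages, lambda a: a > 60, epsilon=0.5)
```

Smaller `epsilon` means stronger privacy and noisier answers; the analyst gets useful aggregate insight while no single record dominates the released value.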
Third, the AI industry must embrace interoperability as a sign of industry maturity. If we truly value free market principles, we need AI systems that can work together across platforms and providers. This interoperability fosters competition, innovation, and consumer choice. Additionally, the industry must demonstrate appropriate levels of corporate responsibility to ensure people aren’t willfully harmed by AI products and services.
Just as the automotive industry gradually improved safety features, sometimes voluntarily and sometimes through regulation, the AI industry must proactively address safety concerns. The question is whether AI companies will make these advances themselves or have them mandated by law. As I recommended to Congress, upgrading existing domain-specific laws is more pragmatic than attempting new, sweeping AI regulations, but the industry can lead by establishing responsible practices before regulation becomes necessary.
Interoperability in AI systems offers numerous benefits beyond just market competition. When AI systems can communicate and work together effectively, they create an ecosystem where specialized solutions can complement each other, leading to more comprehensive and effective applications. This approach prevents vendor lock-in, reduces redundancy in development efforts, and accelerates innovation through collaborative improvement.
In my testimony, I highlighted how the healthcare sector benefited from interoperability standards through the non-profit Health Level Seven’s open standard framework for clinical data. Similar domain-specific approaches could be applied across sectors like transportation, finance, and education, creating interoperable AI ecosystems that respect both innovation and individual rights.
When we discuss AI and geopolitics in 2025, the fundamental question is whether U.S. AI companies will recognize their responsibility to advance versions of AI that enhance human liberties. It is ultimately on them to help make this happen. We should encourage industry to advance AI solutions that can run on local devices under our own control, if we so choose.
The next 12-16 months will bring continued technological, social, and geopolitical changes. To remain innovative, relevant, and competitive, U.S. AI companies must recognize there are methods beyond just generative AI. They must invest in approaches that respect data sovereignty, enhance privacy, and promote interoperability.
This period represents a critical window of opportunity for the U.S. AI industry to establish global leadership not just in technical capabilities but in ethical standards. As other nations develop their own AI strategies, the U.S. approach to balancing innovation with values like privacy, autonomy, and interoperability will influence global norms. By demonstrating that AI can advance while respecting fundamental freedoms, U.S. companies can shape the trajectory of global AI development in ways that align with democratic principles.
Looking toward the near future, the U.S. AI industry must prioritize research and development in approaches beyond generative AI, including Active Inference and other Bayesian approaches, to ensure long-term leadership. By investing in these technologies, the U.S. can foster a more diverse and resilient AI ecosystem that is less reliant on massive datasets and computationally intensive training methods.
While generative AI has demonstrated impressive capabilities, its limitations in adaptability, reasoning, and energy efficiency necessitate exploring alternative approaches. Active Inference, with its focus on building “World Models,” offers a promising path forward. By enabling AI systems to actively predict and infer from their environment, Active Inference allows for more robust decision-making in uncertain and dynamic situations.
This is particularly crucial for applications requiring real-time adaptation and causal reasoning, such as autonomous systems, robotics, and complex problem-solving. These approaches align with my testimony's emphasis that different AI methods produce different outcomes. By complementing generative AI with Active Inference and other Bayesian methods, U.S. companies can enhance the capabilities of AI systems while advancing ethical considerations such as transparency, explainability, and energy efficiency, paving the way for a more sustainable and responsible AI future.
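A toy sketch of the Bayesian machinery these approaches build on: an agent maintains a probabilistic belief over hidden world states and revises it with each new observation via Bayes' rule, rather than relearning from a massive dataset. The two-state world and likelihoods below are hypothetical illustrations, not a full Active Inference implementation (which would also select actions to minimize expected surprise).

```python
def bayes_update(prior, likelihood, observation):
    """One step of Bayesian belief revision over discrete states:
    posterior(s) is proportional to P(observation | s) * prior(s)."""
    unnorm = {s: likelihood[s][observation] * p for s, p in prior.items()}
    total = sum(unnorm.values())
    return {s: v / total for s, v in unnorm.items()}

# Hypothetical two-state world model for a sensor-monitoring agent.
prior = {"clear": 0.5, "obstructed": 0.5}
likelihood = {
    "clear":      {"ping": 0.1, "silence": 0.9},
    "obstructed": {"ping": 0.8, "silence": 0.2},
}

belief = bayes_update(prior, likelihood, "ping")
# A "ping" is far more likely when obstructed, so belief shifts there:
# clear ~0.11, obstructed ~0.89.
print(belief)
```

Because each update is an explicit, inspectable probability calculation, this style of model lends itself to the transparency and explainability goals discussed above, and it adapts from single observations rather than retraining on bulk data.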
The U.S. has an opportunity to lead the world in developing AI that aligns with democratic values and individual freedoms. As I stated before Congress, we need a pragmatic approach to AI that prioritizes the business case before deployment, respects data, and recognizes the diverse needs of different communities, particularly in free and open societies.
The time for the U.S. AI industry to do better is now. By embracing on-premise solutions, respecting privacy choices, and fostering interoperability, our AI companies can set a global standard for responsible innovation. This isn't simply good ethics; it's good business. Companies that align their AI development with American values of freedom, choice, and privacy will gain competitive advantages in both domestic and international markets.
Let’s challenge our AI industry to rise to this occasion. The future of AI, and our technological leadership, depends on it.