The global struggle to preserve Internet First Principles—open access, private communication, and free expression—has moved from philosophical debate to operational reality.
Over the last two years, governments, corporations, and malicious actors alike have begun weaponizing the massive datasets generated by edge devices, satellites, biometric wearables, and smart infrastructure. Artificial intelligence has amplified this process, accelerating data analysis, deepfake generation, and influence operations.
Against this backdrop, the upcoming OODAcon 2025 session “Securing Internet First Principles: Access, Privacy, and Expression in the Age of Disruptive Technology,” a fireside chat with SnowStorm CEO Serene, arrives at a critical inflection point for global digital governance.
As OODAcon 2025 convenes leaders across technology, defense, and policy, this fireside chat will mark a crucial step toward operationalizing the founding ideals of the internet for an era defined by AI, surveillance, and disruption.
SnowStorm’s architecture provides a glimpse of what that operational synthesis looks like in practice—proof that privacy-preserving design can be both decentralized and intuitive.
The rise of SnowStorm, a decentralized micro-proxy network designed to provide censorship-resistant connectivity and privacy at the infrastructure level, signals how the “communication layer” of the internet (where usability, accessibility, and internationalization intersect) has become a frontline for digital freedom.
As outlined on the company’s website FAQ page, SnowStorm transforms infrastructure into social technology, offering users around the world frictionless, privacy-preserving access without depending on centralized intermediaries.
Ultimately, defending access, privacy, and expression in the age of disruptive technology will require aligning three layers of capability: technological (privacy-enhancing technologies, or PETs, and confidential compute), geopolitical (compute sovereignty, AI sovereignty, and cross-border data governance), and human-centered (digital self-sovereignty, usability, internationalization, and accessibility).
Collectively, efforts across these layers signal a shift from privacy pilots to production systems, where usability, automation, and trust-by-design replace academic proofs of concept.
In this environment, the usability and internationalization of privacy tools at the communications layer (the so-called “glue layer”) become critical. And where access itself is at risk, distributed models like SnowStorm’s offer a pathway to maintain expression under censorship conditions.
The NIST Human-Centered Cybersecurity Community of Interest underscores the importance of designing for accessibility, comprehension, and cross-cultural adaptability. SnowStorm’s design philosophy embodies this shift: its distributed micro-proxy model allows users in restrictive environments to access the open internet without complex configuration, bandwidth-heavy VPNs, or centralized dependencies.
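To make that pattern concrete, the sketch below shows the general shape of a distributed micro-proxy client: discover a small pool of volunteer relays from a rendezvous service, pick one at random, and rotate on failure so no single node becomes a choke point. This is a minimal illustration of the architecture described above, not SnowStorm’s actual protocol; the broker endpoint, types, and function names are hypothetical.

```typescript
// Illustrative sketch only: a client choosing among volunteer micro-proxies.
// The broker URL, MicroProxy type, and relayFetch helper are hypothetical.

interface MicroProxy {
  id: string;
  endpoint: string;      // e.g. an HTTPS or WebRTC relay address
  regionHint?: string;   // optional locality hint for latency
}

// Ask a broker/rendezvous service for a small pool of currently available relays.
async function discoverProxies(brokerUrl: string): Promise<MicroProxy[]> {
  const res = await fetch(brokerUrl);
  if (!res.ok) throw new Error(`broker unavailable: ${res.status}`);
  return (await res.json()) as MicroProxy[];
}

// Relay a request through a randomly chosen proxy; rotate on failure so no
// single volunteer node becomes a central point of blocking or observation.
async function relayFetch(brokerUrl: string, targetUrl: string): Promise<Response> {
  const pool = await discoverProxies(brokerUrl);
  for (const proxy of shuffle(pool)) {
    try {
      // The relay simply forwards the encrypted payload; it never needs
      // long-term knowledge of who is browsing what.
      return await fetch(`${proxy.endpoint}/relay?dest=${encodeURIComponent(targetUrl)}`);
    } catch {
      continue; // try the next volunteer relay
    }
  }
  throw new Error("no reachable micro-proxy in the current pool");
}

function shuffle<T>(items: T[]): T[] {
  return [...items].sort(() => Math.random() - 0.5);
}
```

The design choice worth noting is that the trust burden shifts from one central provider to a rotating pool of small relays, which is what makes the approach both censorship-resistant and low-friction for the user.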
This approach reframes privacy as a usability problem as much as a technical one. Human-centered design now represents a strategic control surface, ensuring that privacy protections are not only available but actually used.
Decentralized data-control frameworks, such as the Solid platform Tim Berners-Lee is building at Inrupt, have re-entered the mainstream. Solid decouples application logic from user data by storing personal information in “pods” that users own and permission directly, advancing the original value proposition of the web as a network of user-controlled information spaces.
This model aligns closely with the principles of privacy-by-design and data portability being codified by regulators worldwide. For enterprises managing regulated data flows in sectors like health, education, and finance, pod-based architectures combined with PETs offer a path to analytics without surveillance.
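As a concrete illustration of the pod model, the sketch below reads a profile name from a user-owned pod using Inrupt’s open-source solid-client library. It assumes a publicly readable profile; the WebID URL is a placeholder, and API details should be checked against current Inrupt documentation.

```typescript
// Hedged sketch: reading a profile name from a user-owned Solid pod.
// The WebID below is a placeholder, not a real account.
import {
  getSolidDataset,
  getThing,
  getStringNoLocale,
} from "@inrupt/solid-client";
import { FOAF } from "@inrupt/vocab-common-rdf";

async function readProfileName(webId: string): Promise<string | null> {
  // The application fetches data from wherever the *user* stores it;
  // the app itself keeps no copy and runs no database of its own.
  const dataset = await getSolidDataset(webId);
  const profile = getThing(dataset, webId);
  return profile ? getStringNoLocale(profile, FOAF.name) : null;
}

// Apps only see what the pod owner has permissioned.
readProfileName("https://example.solidcommunity.net/profile/card#me")
  .then((name) => console.log("Profile name:", name ?? "(not shared)"));
```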
Yet while innovation has accelerated, policy volatility around encryption has exposed fragility in the global privacy consensus. In early 2025, the United Kingdom attempted to curtail Apple’s Advanced Data Protection for iCloud, citing law-enforcement access concerns. By late summer, after significant backlash from U.S. officials and privacy advocates, reports from outlets including The Verge and the Associated Press confirmed the proposal’s retreat.
The episode revealed the tension between national-security imperatives and individual privacy rights, and highlighted the need for “crypto agility”: architectures that can adapt to shifting jurisdictional requirements without sacrificing core end-to-end protection.
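In practice, crypto agility usually means isolating algorithm choice behind an abstraction so cipher suites can be rotated by policy (for example, toward post-quantum hybrids or jurisdiction-specific profiles) without rewriting application code. The sketch below is illustrative only; the interface and registry names are hypothetical and do not describe any particular vendor’s implementation.

```typescript
// Illustrative sketch of "crypto agility": application code depends on an
// abstract cipher-suite interface, so algorithms can be swapped by policy
// without touching business logic. All names here are hypothetical.

interface CipherSuite {
  readonly id: string; // e.g. "aes-256-gcm", "pq-hybrid-x25519-kyber"
  encrypt(plaintext: Uint8Array, key: Uint8Array): Promise<Uint8Array>;
  decrypt(ciphertext: Uint8Array, key: Uint8Array): Promise<Uint8Array>;
}

class SuiteRegistry {
  private suites = new Map<string, CipherSuite>();

  register(suite: CipherSuite): void {
    this.suites.set(suite.id, suite);
  }

  // Policy (not code) decides which suite is active for a given deployment.
  resolve(policyId: string): CipherSuite {
    const suite = this.suites.get(policyId);
    if (!suite) throw new Error(`no cipher suite registered for policy ${policyId}`);
    return suite;
  }
}

// Application code stays the same whether the active suite is classical,
// hybrid post-quantum, or a jurisdiction-mandated profile.
async function sealRecord(
  registry: SuiteRegistry,
  policyId: string,
  key: Uint8Array,
  record: Uint8Array,
): Promise<Uint8Array> {
  return registry.resolve(policyId).encrypt(record, key);
}
```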
Regulatory enforcement also escalated, demonstrating that data misuse carries material costs. Ireland’s Data Protection Commission, LinkedIn’s lead regulator in the EU, fined the company roughly €310 million for violations of the GDPR’s purpose-limitation and consent requirements, underscoring how adtech-driven data processing can translate directly into civil-liberties risk and shareholder exposure (AP coverage). For organizations, this marked a turning point: privacy is no longer a compliance checkbox but a fiduciary responsibility tied to trust and resilience.
If we are to defend access, privacy, and expression, we must build not only better protocols and tools but also a culture and a community grounded in stronger institutions and value structures around them.
The 2025 playbook emerging from these trends is clear. By weaving governance, architecture, human rights, security, and market dynamics together, the subthemes that follow show that the future Internet is no longer just a technical artifact but a contested system of norms, incentives, culture, community, and agency, one whose shape will be defined not by technology alone but by the alignment of architecture, governance, and human agency.
“The Internet is no longer borderless—it is becoming territorialized through data.”
Governance of the Internet has fractured into rival camps. Democracies continue to emphasize transparency and individual rights, while authoritarian regimes assert cyber-sovereignty—national control of information flows, compute, and content. The OECD’s 2025 report on Privacy-Enhancing Technologies (PETs) proposes a cooperative framework for sharing AI models and data responsibly, yet the global divergence in data governance remains stark.
These regulatory schisms are reshaping the foundational values of the web. Whether the next decade yields a universal network or a patchwork of controlled data zones will depend on how privacy norms, AI regulation, and trade policies converge—or collide.
“Compute is the new territory—and sovereignty now extends into silicon.”
The Internet’s physical and logical layers are fusing, with data collection, inference, and decision-making happening at every node. Control over compute geography now equals control over governance. The Center for a New American Security (CNAS) frames this as a national security issue: nations that dominate compute infrastructure will shape not only AI capability but also the privacy, encryption, and lawful-access standards of the future.
This trend highlights the importance of decentralized architectures. Privacy-centric alternatives like SnowStorm’s decentralized proxy mesh demonstrate that routing privacy, censorship resistance, and global accessibility can be engineered back into the network itself. Infrastructure is no longer neutral—it encodes political choice.
“Freedom of thought is the next frontier of free expression.”
The conversation around free expression has evolved beyond “content moderation” toward the defense of cognitive liberty—the right to think, reason, and dissent without algorithmic interference. As machine-learning models increasingly mediate access to information, civil society organizations such as Access Now and Article 19 are advocating that international rights frameworks expand to protect informational autonomy.
Technologies like PETs, decentralized identity, and differential privacy are emerging as technical safeguards for this right—tools that enable anonymity, reduce tracking, and prevent manipulation by predictive algorithms.
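As a small worked example of one of these safeguards, the sketch below applies the classic Laplace mechanism of differential privacy to a counting query: noise calibrated to the query’s sensitivity bounds how much any single person’s record can influence the published result. The epsilon value is illustrative.

```typescript
// Minimal sketch of differential privacy via the Laplace mechanism.

function laplaceSample(scale: number): number {
  // Inverse-CDF sampling of a Laplace(0, scale) distribution.
  const u = Math.random() - 0.5;
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

// Release a noisy count: the sensitivity of a counting query is 1, because
// adding or removing one person changes the true count by at most 1.
function privateCount(trueCount: number, epsilon: number): number {
  const sensitivity = 1;
  return trueCount + laplaceSample(sensitivity / epsilon);
}

// Smaller epsilon => more noise => stronger privacy, less accuracy.
console.log(privateCount(1_204, 0.5)); // e.g. ~1202.3 on one run
```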
“The same models that power defense can power deception.”
AI has permanently reshaped the cybersecurity landscape. The MIT Sloan / Safe Security 2025 study found that 80% of ransomware campaigns now employ AI to enhance reconnaissance, targeting, and social engineering. This same AI is also transforming defense: autonomous red-teaming, AI-driven anomaly detection, and deception environments are now core to enterprise resilience.
As privacy, cybersecurity, and compliance converge, organizations are integrating OECD PETs and NIST Privacy-Enhancing Cryptography frameworks into their security operations. Protecting systems now inherently means protecting rights.
“Trust is the new currency of the data economy.”
Privacy has shifted from a regulatory burden to a market differentiator. The global PETs market, valued at roughly $4B in 2024, is projected to surpass $30B by 2030, driven by compliance requirements and consumer demand for trust-centric services.
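For context, those figures imply a compound annual growth rate of roughly 40 percent, a quick back-of-envelope check treating the $4B (2024) and $30B (2030) numbers cited above as endpoints:

```typescript
// Implied compound annual growth rate from the market figures cited above.
const startValue = 4e9;  // ~$4B in 2024
const endValue = 30e9;   // ~$30B projected for 2030
const years = 6;
const cagr = Math.pow(endValue / startValue, 1 / years) - 1;
console.log(`Implied CAGR: ${(cagr * 100).toFixed(1)}% per year`); // ≈ 39.9%
```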
Simultaneously, decentralized models such as SnowStorm and Solid, together with verifiable credentials, enable users to control their data directly. Investors are beginning to treat privacy infrastructure not as a cost but as a capital investment in long-term resilience and user loyalty.
The OECD’s privacy-enhancing technologies policy overview and Future of Privacy Forum’s Global PETs Repository frame PETs as key enablers of secure data economies—tools that allow organizations to share insights without compromising confidentiality.
“The future of the Internet will not be decided by code alone, but by the values we encode within it.”
Governance, infrastructure, rights, security, and markets are converging into a single system: an operating model for the next Internet. The architecture of tomorrow’s web will either entrench surveillance or empower sovereignty at the individual level. Whether the Internet of 2030 remains open will depend on how these domains are synchronized around shared First Principles: transparency, trust, and human agency.
As OODAcon 2025 convenes leaders across government, industry, and civil society, the challenge is not merely to defend privacy—but to design the usable, decentralized, and resilient Internet that will sustain freedom in an age of ubiquitous computation.