By Dr. David Bray and Jeff Jonas
The democratization of technology has created the democratization of danger. Individuals and small groups can now deploy drones, personal robots, gene-editing tools, and other accessible technologies to cause harm that once required nation-state resources. The most dangerous response to this reality is fear-driven overreach—a “knee-jerk overswing” toward centralized surveillance and control that sacrifices the liberties we seek to protect.
History offers guidance. When TNT and dynamite became widely accessible in the early 1900s, free societies adapted without becoming permanent police states through targeted regulation, improved investigation, and social norms. We must apply similar wisdom today.
This article proposes a two-part strategy. First, deploy foundational capabilities now: decentralized “smoke detector” sensor grids with privacy-by-design, systems that connect data across silos without mass surveillance, and bio-awareness networks for early pathogen detection. Second, prepare liberty-preserving crisis response options that empower individuals and communities—from acoustic drone detectors to air quality monitors—rather than expanding state power.
Critical to all approaches are safeguards: community consent, independent oversight, data minimization, sunset clauses, and redress rights. Technology is only as liberty-preserving as the governance structure surrounding it. Society should examine these approaches now, with sufficient time for democratic deliberation, rather than waiting until crisis forces hasty decisions.
On November 2, 2024, federal agents arrested Skyler Philippi moments before he powered up a commercial drone carrying three pounds of C-4 explosives. His target: a Nashville electrical substation that, if destroyed, would have left thousands without power, including hospitals, just three days before the U.S. presidential election. The 24-year-old had built the drone himself, purchased black powder for pipe bombs online, and conducted reconnaissance using techniques learned from studying previous power grid attacks. The barrier between intent and capability? A few hundred dollars in consumer equipment and several months of self-directed research.
The long arc of the last thirty years has bent toward a singular reality: the democratization of power is inextricably linked to the democratization of risk. For most of the 20th century, the capacity to project significant force, cause mass disruption, or inflict strategic damage was the exclusive province of nation-states. It required massive industrial bases, centralized command structures, and billions of dollars in research and development. If you wanted to threaten a city, you needed an air force. If you wanted to engineer a biological threat, you needed a state-sponsored laboratory.
Today, that monopoly has dissolved. As we stand in 2026 and look toward the decade ahead, we are witnessing the rise of the “super-empowered” individual. This is a world where a hobbyist can weaponize a commercial drone, where accessible gene-editing tools can resurrect historical pathogens, and where coordinated attacks can cripple power grids. Personal robots, autonomous machines that can navigate our streets, enter our buildings, and interact with our physical world, are arriving rapidly and will amplify these risks even further. When the cost barrier to causing targeted harm drops below a month’s rent, or when 100 people each launch such an attack against 100 power stations in one large metropolitan area, we have a problem that traditional security frameworks were never designed to address. It is a world where the genetic code of the 1918 Spanish Influenza is public knowledge, and the CRISPR tools required to manipulate it are available to anyone with a credit card.
The barrier to entry for causing harm has never been lower. However, the most potent weapon in this new landscape is not the drone, the swarm, or the pathogen itself. It is the fear of the unknown. It is the corrosive suspicion among the public that their existing laws, their technology, and their social order are insufficient to keep them safe. When a free society loses faith in its ability to stabilize a crisis, it becomes vulnerable to a “knee-jerk overswing,” a desperate rush toward centralized control that sacrifices the very liberties we seek to protect.
This article is not a warning of doom. It is a call to action. We have the tools, the ingenuity, and the historical precedent to navigate this challenge. By using our collective foresight now, we can identify actions that local governments, national authorities, and the private sector can take today to build resilience. Simultaneously, we can prepare a set of options for the day an unavoidable crisis occurs, options that allow us to respond with strength without dismantling our civil liberties or the “right to be left alone” that Justice Brandeis called the most comprehensive of rights.
We are not the first generation to face the threat of technology outpacing governance. In the early 1900s, the world was gripped by a wave of anarchist violence fueled by the invention and proliferation of dynamite and TNT. Suddenly, high explosives, previously difficult to manufacture and transport, were widely accessible. A single individual with a grievance could carry a suitcase capable of leveling a building. The fear was palpable. “Infernal machines” were used to devastating effect in public squares, and the tragic Bath School disaster of 1927 remains a haunting reminder of the lethality of that era.
The public outcry for safety was deafening, and the calls for total state control were loud. Yet, free societies did not become permanent police states. We did not ban the industrial utility of explosives, nor did we place a government agent in every kitchen. Instead, society adapted. We regulated the precursors of explosives. We improved investigative techniques. We established societal norms that marginalized the actors while preserving the utility of the technology, and we managed the risk without destroying our way of life.
We must apply this same wisdom to the challenges of 2026 and the decade ahead. The goal is not to eliminate all risk, which is impossible, but to manage risk in a way that preserves human agency and the foundations of a free society.
Managing risk requires information, and this is where our modern systems often fail. In reviewing the major intelligence failures of the last two decades, from the 9/11 attacks to the response to Hurricane Katrina, a consistent theme emerges. It is rarely a lack of data that leads to disaster; it is a failure to connect the data that already exists.
This is consistent with research on “Knowledge Ecosystems” conducted after 9/11 and the anthrax attacks of 2001. As documented by the 9/11 Commission, more than sufficient knowledge existed to mitigate these events, but the knowledge was highly distributed and fragmented across multiple departments and agencies. Back then, only specialized agencies like the CIA and CDC regularly faced situations where they had to rapidly piece together fragmented information to prevent disasters. Nowadays, almost every organization in the world, in both the private and public sectors, faces similar challenges where no one individual possesses sufficient knowledge to either mitigate negative outcomes or capitalize on positive opportunities.
Yet there are solutions, including privacy-preserving ways to better connect data and make sense of incomplete information about risks, threats, and counter-measures. Such solutions are key to both preventing future adverse events of significant consequence and to responding more quickly to active threats and forensics after an incident. In a world of democratized risk, we cannot afford to keep our puzzle pieces in separate boxes. We must connect the dots at the speed of the threat, without building a massive, central database that violates the privacy of the innocent.
We propose a shift from traditional public safety approaches akin to a “Fire Brigade” model to a decentralized “Smoke Detector” model.
One hundred and fifty years ago, fire safety was entirely reactive. If a fire started, you relied on a centralized response, the fire brigade, to come and save you. Often, they arrived too late. Today, we have ubiquitous, private-sector “smoke detectors” in almost every building. These devices fundamentally changed the math of fire safety.
Crucially, the smoke detector is a decentralized, edge-based solution. It sits in your home, but it does not report on your cooking habits to the government. It does not record your conversations. It is “blind” to everything except the specific signature of danger, particulate matter in the air. When it detects that signature, it alerts you directly, giving you the agency to act. It empowers the individual to be the first line of defense.
We must replicate this model for the drone and bio-threats of the 21st century. We need systems that can detect the “smoke” of a drone swarm or a biological release without watching the people they are protecting. This is not about building a panopticon. It is about building a digital immune system that respects the individual while protecting the community.
The following actions represent a strategy that local and national governments, as well as the private sector, should initiate immediately. These are foundational investments that improve our security and resilience regardless of whether a major attack happens tomorrow or in ten years.
Local governments need to stop waiting for a federal mandate. Municipalities, in partnership with the private sector, should begin studying deployment strategies for decentralized sensor grids in high-density urban areas and near critical infrastructure, which may include early pilot programs to test and refine these approaches. These are not cameras in the traditional sense. Like a smoke detector, these sensors should be designed to be “blind” to personally identifiable information.
Using a combination of audio triangulation and low-resolution optical sensors, these grids can detect the unique frequency of drone rotors or the flight patterns of an inbound swarm. Similar sensor networks can also detect anomalous biological signatures, unusual chemical releases, or other environmental threats—providing a comprehensive early warning system for multiple threat vectors. By placing the processing at the “edge,” meaning the data is analyzed on the device and only an alert is sent, we protect the privacy of citizens while providing early warning to first responders. This technology exists today; it simply needs to be fielded with a governance framework that ensures it remains a tool for safety, not control. Deployment should require community consent through local democratic processes, with ongoing public transparency about what data is collected and how it’s used.
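As a minimal sketch of this edge-processing principle, a sensor could analyze each audio frame locally and transmit nothing but a yes/no alert. The rotor frequency band, threshold, and function names below are illustrative assumptions, not calibrated drone signatures:

```python
import numpy as np

# Illustrative rotor band for small multirotors; a fielded sensor would use
# calibrated acoustic signatures, not a single fixed band.
ROTOR_BAND_HZ = (150.0, 400.0)
ALERT_THRESHOLD = 0.5  # fraction of total spectral energy inside the band

def edge_alert(frame: np.ndarray, sample_rate: int) -> bool:
    """Analyze one audio frame on-device; only this boolean ever leaves it.

    The raw audio is discarded after analysis, mirroring the smoke-detector
    principle: process at the edge, transmit only the alert.
    """
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0:
        return False  # silence: nothing to report
    lo, hi = ROTOR_BAND_HZ
    band_energy = spectrum[(freqs >= lo) & (freqs <= hi)].sum()
    return bool(band_energy / total > ALERT_THRESHOLD)

# A one-second 250 Hz tone (inside the band) trips the alert; silence does not.
sr = 8000
t = np.arange(sr) / sr
rotor_like = np.sin(2 * np.pi * 250.0 * t)
```

Because only the boolean crosses the network, a device built this way is structurally incapable of reporting conversations or habits, which is exactly the governance property the “blind sensor” framing demands.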
We must improve our ability to connect data across silos without creating a central “Big Brother” database. This is where advanced entity resolution technologies come into play. We can build systems that allow different organizations, such as law enforcement, public health, and private security, to check if they are holding related “puzzle pieces” without revealing the underlying data unless a match is found.
For example, if a chemical supplier sells a large quantity of a dual-use reagent to an unknown entity, and a rental truck company rents a vehicle to that same entity, neither event is inherently suspicious on its own. But when connected, they form a picture of intent. With privacy-preserving context computing, we can identify these intersections in real time and alert humans only when the threshold of risk is crossed. This approach respects data sovereignty while enabling the rapid connection of dots that has eluded us in past crises. Data should be retained only as long as necessary to assess an immediate threat and automatically deleted when the risk threshold is not met, with individuals having redress rights.
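One way to sketch the “match without sharing” idea is keyed hashing: each party publishes only opaque tokens, and a non-empty intersection, not the records themselves, is what triggers human review. This is a deliberate simplification of real privacy-preserving entity resolution, which must handle fuzzy names, key management, and oversight; every name and key below is hypothetical:

```python
import hashlib
import hmac

# Shared key known only to the cooperating parties (illustrative; a real
# deployment would use managed keys and richer matching than exact tokens).
SHARED_KEY = b"rotate-me-regularly"

def blind_token(identifier: str) -> str:
    """Turn an identifier into an opaque keyed hash.

    Parties can compare tokens without revealing the identifier to anyone
    who lacks the key; normalization (here, lowercasing) makes trivially
    different spellings collide on purpose.
    """
    normalized = identifier.strip().lower()
    return hmac.new(SHARED_KEY, normalized.encode(), hashlib.sha256).hexdigest()

def find_intersections(tokens_a: set, tokens_b: set) -> set:
    """Return only tokens both parties hold; no other data is exchanged."""
    return tokens_a & tokens_b

# Hypothetical example: a reagent supplier and a truck-rental company each
# tokenize their customer lists locally, then compare tokens.
supplier_tokens = {blind_token("ACME Holdings LLC"), blind_token("J. Doe")}
rental_tokens = {blind_token("acme holdings llc"), blind_token("R. Roe")}

matches = find_intersections(supplier_tokens, rental_tokens)
# A non-empty intersection is the trigger for human review; underlying
# records are requested only after that threshold is crossed.
```

The design choice worth noting is that the underlying data never leaves either silo; only the existence of overlap is learned, and only by the keyholders.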
Before discussing biological detection, we must address a common misconception that fuels excessive fear and counterproductive policy responses. There is a profound difference between “knowledge of something” and “experience in doing” something. Consider the path of a medical doctor. A physician requires textbook knowledge, certainly, but that knowledge alone does not make a surgeon. A surgeon must spend many long hours of practice, first with cadavers, then with patients under the close supervision of an experienced mentor, before they gain true expertise. An AI might tell a human how to perform a liver transplant, step by step, but without that extensive practice and training, almost no one could actually carry out the operation. They would fail, and the patient would die.
This distinction is critical because it should temper our fear. The mere availability of information, whether from an AI, a textbook, or a scientific journal, does not automatically translate into the capability to cause sophisticated harm. Weaponizing a pathogen or building a reliable drone swarm requires not just knowledge, but months or years of trial and error, failed experiments, and hard-won expertise. When we immediately equate information with imminent danger, we risk knee-jerk reactions toward censorship or excessive AI paranoia. This reaction is not only inconsistent with the values of a free society, but it is also counterproductive. It forces information underground, where it cannot be studied or countered, and it fails to prepare us for naturally evolving pathogens that require no malicious actor at all.
We cannot “un-invent” dangerous knowledge. The genie is out of the bottle. The genome of the 1918 Spanish Influenza is public, and CRISPR is widely available. Trying to censor scientific papers is a futile exercise.
Here, we must return to our foundational distinction: knowledge of is different from expertise in doing an activity. The fact that a genome is published does not mean that any individual can weaponize it. Reanimating a pathogen requires not just the “textbook” information, but months of laboratory work, failed experiments, and hard-won practical skill. Just as reading about a liver transplant does not make one a surgeon, reading a scientific paper does not make one a bioweaponeer. When we panic and rush to censor, we not only violate the principles of open inquiry that underpin scientific progress, but we also fail to address the real threat: the small number of individuals who might acquire the expertise to cause harm.
Instead of restricting knowledge, we must focus on detecting the application of that knowledge for harm. This means normalizing the use of automated wastewater testing and air sampling in transportation hubs and major cities. This provides a “digital immune system” for our population. Just as a smoke detector sniffs the air for fire, these systems analyze the biological “exhaust” of a city to detect anomalies, such as novel pathogens or spikes in specific agents, days or weeks before clinical cases show up in hospitals. This allows us to act “left of boom,” containing a biological event before it becomes a pandemic. Just as the polio vaccine defeated polio, advance knowledge of emerging threats can be used to design remedies for future biological threats. The key is early detection, not censorship.
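The baseline-versus-spike logic such a network might run can be sketched as a simple z-score check on daily marker concentrations from one sampling site. The readings and threshold below are illustrative, and a real surveillance pipeline would also model seasonality, flow normalization, and lab noise:

```python
from statistics import mean, stdev

def is_anomalous(history: list, today: float, z_threshold: float = 3.0) -> bool:
    """Flag a reading that spikes far above the recent baseline.

    'history' holds prior daily pathogen-marker concentrations; as with the
    smoke-detector model, only the anomaly flag needs to leave the site.
    """
    baseline = mean(history)
    spread = stdev(history)
    if spread == 0:
        return today != baseline  # any change from a flat baseline is notable
    return (today - baseline) / spread > z_threshold

# Two weeks of stable readings (arbitrary units), then candidate new values
# are tested against that baseline.
history = [10.0, 11.0, 9.5, 10.5, 10.0, 9.8, 10.2,
           10.1, 9.9, 10.3, 10.0, 10.4, 9.7, 10.1]
```

A reading of 25.0 against this history would flag as anomalous, while 10.6 would not; the point is that the alert fires on deviation from the city's own baseline, not on any information about individuals.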
Despite our best efforts, a significant drone-related crisis or a biological event may still occur. When it does, the public pressure to “do something” will be immense. History tells us that in moments of fear, societies are prone to trading liberty for the illusion of security. To prevent a permanent slide into centralized control, we need to identify viable, action-oriented options now, just as free societies tackled the risks of TNT in the past.
The table below presents six candidate solutions that give individuals, businesses, and communities the tools and choices to protect themselves without relying solely on a centralized state response. These options should be assessed, developed, tested, and made available so that when a crisis occurs, we have a menu of liberty-preserving responses ready to deploy.
Note: The following candidate solutions focus primarily on drone threats, but similar principles apply to other democratized dangers.
| Solution | Description | Empowerment Angle | Legal Viability |
| --- | --- | --- | --- |
| Optical Dazzlers and Distractors | Devices that use bright light patterns to “blind” or confuse a drone’s camera or optical sensors, causing it to lose its target lock or abort its mission. | Gives individuals and businesses a non-destructive way to protect their immediate space. Akin to anti-paparazzi counter-measures. | Generally legal as they do not destroy the drone or jam frequencies. Legislation should clarify and protect this right. |
| Personal Acoustic Detection Devices (“Drone Smoke Detectors”) | Inexpensive, off-the-shelf sensors that individuals can install at home or businesses can deploy. These detect the specific acoustic signature of drone rotors and alert the owner, giving them time to seek cover or activate other defenses. | Provides “human agency” through early warning. You are not helpless; you have information to make your own choices. | Fully legal. Passive detection only. No interference with the drone. |
| Personal Air Quality Monitors (“Biological Smoke Detectors”) | Advanced, affordable air quality sensors for homes and businesses that can detect anomalous particulates or biological signatures, alerting occupants to unusual airborne agents before symptoms appear. | Gives individuals real-time awareness of their immediate environment. You decide when to shelter-in-place, use filtration, or evacuate based on your own data. | Fully legal. Passive detection only. Consumer versions exist today and could be enhanced with biological signature detection capabilities. |
| Digital Signature Masking (Personal Radio Frequency Hygiene) | Devices or apps that allow individuals to randomize or mask their 5G phone signature when in public, making it harder for a drone to “lock on” to a specific person using their digital footprint. | Empowers individuals to control their own digital visibility, protecting the “right to be left alone.” | Legal. Similar in concept to VPNs for physical presence. Should be encouraged as a personal safety measure. |
| Geofencing Advocacy and “No-Go Zone” Beacons | Businesses and homeowners can advocate for and deploy “No-Go Zone” beacons that broadcast to compliant autonomous machines. Most commercial drones (e.g., DJI) already respect geofencing, and as personal robots become widespread, similar compliance protocols can prevent unauthorized entry. This creates a “digital fence” around private property. | Empowers property owners to define boundaries for compliant autonomous systems, whether aerial drones or ground-based robots. Extends the concept of property rights to all autonomous access. | Legal and already supported by major drone manufacturers. As personal robots emerge, similar protocols should be standardized. Outreach and advocacy are needed to expand coverage and compliance. |
| Community “Neighborhood Resilience Networks” | Volunteer networks trained to identify and report drone anomalies, akin to volunteer fire departments. Formalizes the “see something, say something” approach with training, communication channels, and coordination with first responders. | Channels the desire to help into effective, coordinated action rather than panic or vigilantism. Helps build and maintain community cohesion. | Fully legal. Observation and reporting only. Strengthens the social fabric. |
These six options we have put forward share a common philosophy: they empower individuals and communities to take action within a framework of law and liberty. They do not require a massive expansion of government power. They do not create a permanent state of observation. Instead, they give people the tools to protect themselves and the information to make their own choices. This is the essence of human agency.
Critical to all these approaches is a framework of safeguards: independent oversight, community consent, data minimization, sunset clauses requiring periodic reauthorization, and the right to know if you’ve been flagged by any detection system. Technology is only as liberty-preserving as the governance structure surrounding it. Also critical to success will be vigilance against scope creep, both (1) between the “privacy-preserving” design principles and actual deployment, and (2) between what constitutes sufficient cause for checking connections and what is done in practice. Focused assessment will also be necessary to define the thresholds of risk that trigger action and to ensure privacy remains protected when law enforcement, public health, and private security can cross-reference data.
This list is not exhaustive. We invite others—engineers, policymakers, civil liberties advocates, and community organizers—to propose additional approaches that empower individuals and preserve freedom while addressing emerging threats. The goal is to build a robust menu of options before crisis forces hasty decisions.
Justice Louis Brandeis famously spoke of the “right to be left alone” as the most comprehensive of rights and the one most valued by civilized men. In 2026, that right is under pressure not just from the state, but from the democratization of technology that allows any individual to reach into our private lives with autonomous machines like drones. As personal robots become commonplace in the next few years, capable of navigating our neighborhoods and entering our spaces, this pressure will only intensify.
The good news is that we have made more technological progress in the last fifteen years than in the previous fifty. We have the tools to detect these threats. We have the computing power to connect the puzzle pieces. The challenging news is that our governance structures are still running on 20th-century cycles, trying to solve 21st-century problems with 19th-century bureaucracy.
Society should begin examining these approaches now, with sufficient time for public deliberation and democratic oversight, rather than waiting until a crisis forces hasty decisions. By deploying “digital smoke detectors,” solving the puzzle piece problem with privacy-preserving technologies, and preparing a menu of liberty-preserving options for individuals and communities, we can stabilize our society. We can prove that free people do not need to become controlled people to be safe. We can manage the risks of this age of democratized danger with the same ingenuity and commitment to freedom that allowed us to survive the age of TNT.
Various technologies are ready, and more will come. The architecture is defined. The only missing ingredient is the foresight to act before a crisis forces our hand. Let us build a future where human agency, not fear, is the defining characteristic of our response to the decade ahead.
About the Authors:
Dr. David A. Bray is a Distinguished Fellow and Chair of the Accelerator with the Alfred Lee Loomis Innovation Council at the non-partisan Henry L. Stimson Center. He is also a CEO and transformation leader for different “under the radar” tech and data ventures seeking to get started in novel situations. He is Principal at LeadDoAdapt Ventures, Inc. and has served in a variety of leadership roles in turbulent environments. He previously served as a non-partisan Senior National Intelligence Service Executive, as Chief Information Officer of the Federal Communications Commission, and as IT Chief for the Bioterrorism Preparedness and Response Program. Business Insider named him one of the top “24 Americans Who Are Changing the World,” the World Economic Forum named him a Young Global Leader, and he has received both the Joint Civilian Service Commendation Award and the National Intelligence Exceptional Achievement Medal. David accepted a leadership role in December 2019 to direct the successful bipartisan Commission on the Geopolitical Impacts of New Technologies and Data that included Senator Mark Warner, Senator Rob Portman, Rep. Suzan DelBene, and Rep. Michael McCaul. From 2017 to the start of 2020, David also served as Executive Director for the People-Centered Internet coalition chaired by Internet co-originator Vint Cerf. He has served in President, CEO, Chief Strategy Officer, and Strategic Advisor roles for twelve different startups. The U.S. Congress invited him to serve as an expert witness on AI in September 2025.
Jeff Jonas, founder and CEO of Senzing, is a data scientist and creator of Entity Resolution systems. For more than three decades, he has been at the forefront of solving complex big data problems for companies and governments. National Geographic recognized him as the Wizard of Big Data. Jonas sold his last company to IBM in 2005. Prior to founding Senzing, Jonas served as an IBM Fellow and Chief Scientist of Context Computing at IBM. He led a team focused on creating next-generation AI for Entity Resolution technology, code-named G2. At IBM G2 was deployed in many innovative ways, including modernizing U.S. voter registration through a joint effort with Pew Charitable Trust and helping the Singaporean government build a maritime domain awareness system to better protect the Malacca Strait. In 2016, Jonas founded Senzing, based on a one-of-a-kind IBM spinout of the G2 technology and team. He regularly meets with government leaders, industry executives and think tanks around the globe about innovation, national security and privacy. Jonas serves on the boards of Electronic Privacy Information Center (EPIC), and the advisory board of the Electronic Frontier Foundation (EFF).