Biological Smoke Detectors for the Digital Age: A Better Future of AI and Bio Security

In recent years, I’ve observed a concerning trend in discussions about artificial intelligence and biotechnology: the rise of “doomers” who paint apocalyptic scenarios about these technologies. I acknowledge that these technologies present real risks. Yet as someone who served as a Senior National Intelligence Service Executive and recently briefed the U.S. Congress on AI, I believe we need a more nuanced, practical approach to AI and bio security: one that distinguishes between knowledge and expertise and focuses on decentralized solutions rather than centralized control.

Moreover, as someone who responded to the events of 9/11, anthrax in 2001, West Nile virus, SARS and monkeypox in 2003, ricin events, and other biological scenarios, and who helped prevent some from becoming catastrophic, I can tell you firsthand: the way we detect and stop bad actors attempting to use bioweapons is not by limiting information. That approach misunderstands the fundamental nature of both expertise and security.

Knowledge vs. Experience: A Critical Distinction

The reality is that “knowledge of something” does not equate to “experience in” doing something. We can all read online about how surgery is performed, yet without extensive practice on cadavers and simulated patients, none of us is ready to actually perform surgery. This distinction is crucial when we consider the intersection of AI and biotechnology.

The purported doomsday assumption that AI will inevitably create new malign pathogens misses three key points. First, AI systems are only as good as the data they’re trained upon. For an AI to be trained on biological data, that data must first exist, which means it’s already available for humans to use with or without AI.

Second, the focus should be on preventing bad actors from misusing biotechnology. Experimentation requires months of a human actor doing in silico (computer-based) or in situ (in the original place) experiments or simulations. This work would require access to chemical and biological reagents, which could alert law enforcement authorities and yield other signatures of preparatory activity in the real world.

Third, the sheer complexity of the different layers of biological interaction, combined with the risk of certain types of generative AI producing hallucinated or inaccurate answers, makes the risk less significant than it might initially seem. Expert human actors working together across disciplines in a concerted fashion represent a much more significant threat.

The Collision of Biology and Data Science

We are witnessing the collision of two scientific worlds: biological science and data science. As I’ve previously noted, converging advances in these fields are decreasing the cost and time it takes to understand how living systems function, providing the foundation for new medical treatments, sustainable energy solutions, and more nutritious food.

The cost of sequencing the human genome has fallen dramatically in recent decades. As a result, genome sequencing has increased, but our ability to understand the information it contains has not kept pace. This is where AI can help make sense of this growing volume of data.

In 2020, an AI called AlphaFold solved a 50-year-old problem: predicting a protein’s three-dimensional shape from its amino acid sequence. By dramatically reducing the time required to reach these solutions, the model opened new paths for drug discovery and design, including research on antibiotic resistance, cancer, and countering COVID-19.

The decisions and actions communities take today will shape the next 30 to 40 years of both fields of science and the associated benefits that emerge at their intersection. One future path leads to benefits being shared with communities, plants, animals, and the Earth’s global ecosystem. An alternative path leads to benefits being held by the few, locked behind restrictive legal and digital barriers.

Moving Beyond Censorship

I am passionate about challenging those who claim that AI will lead to certain doom. Like any technology, it will sadly cause harm to individuals and societies; at the same time, certain flavors of AI will do tremendous good. Meanwhile, I am concerned by the notion that writing filters (read: censors) for AI is somehow a solution consistent with U.S. values or long-term success, given where things are going globally.

The United States should not believe that limiting knowledge is the answer here, for two critical reasons. First, this approach will simply force the knowledge underground, where it becomes harder to monitor and regulate. Second, and perhaps more importantly, nature itself will continue to evolve pathogens that can harm humans, regardless of our information controls. We need to be prepared for both natural and human-caused threats.

I am strongly against solutions premised on the belief that censorship of information will succeed. Bio + AI does not automatically mean certain doom; in fact, it might be the only way we mitigate climate change-related impacts on our world.

At any rate, I find many books on this topic to be philosophical treatises disconnected from the reality of how tech or other scientific fields operate. The prescriptive guidance they have given may have prompted us to over-index on possible far-off future risks while missing the very real “here-and-now” issues that need urgent solutions for most people.

The Smoke Detector Paradigm for 2025

If we went back 150 years and told people that we could solve some of the problems of burning buildings (and burning cities) with private-sector smoke detectors and fire alarms that notified fire departments to respond, folks might have thought we were crazy. Yet this type of decentralized solution succeeded.

What do inexpensive “smoke detectors” for 2025 look like on these issues? This is where AI and biotech can deliver the biggest benefit. Specifically, AI and biotech can surface indicators and warnings of risky pathogens, and can spot vulnerabilities in global food production and climate change-related disruptions, making interconnected global systems more resilient and sustainable.

Such an approach would not require massive intergovernmental collaboration before researchers could get started. Privacy-preserving approaches using economic data and aggregate information could provide early warnings without compromising individual privacy or security.
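
To make the idea concrete, here is a minimal sketch, in Python, of what the data layer of one such “smoke detector” might look like, assuming a network that shares only aggregate daily counts of a syndromic indicator. The data, window size, and threshold below are illustrative assumptions I am supplying, not a production surveillance algorithm.

# Hypothetical sketch: a minimal "biological smoke detector" that watches an
# aggregate, privacy-preserving signal (e.g., daily counts of a syndromic
# indicator reported by a clinic network) and flags when today's count
# deviates sharply from a rolling baseline.
from statistics import mean, stdev

def early_warning(daily_counts, baseline_days=28, z_threshold=3.0):
    """Return True if the most recent daily count is an outlier versus the baseline window."""
    if len(daily_counts) <= baseline_days:
        return False  # not enough history yet to form a baseline
    baseline = daily_counts[-(baseline_days + 1):-1]  # the window just before today
    mu, sigma = mean(baseline), stdev(baseline)
    today = daily_counts[-1]
    if sigma == 0:
        return today > mu  # flat baseline: any increase is worth a look
    return (today - mu) / sigma > z_threshold

# Illustrative data: a stable baseline of roughly 10 reports per day, then a spike.
history = [10, 9, 11, 10, 12, 9, 10, 11, 10, 9, 11, 10, 10, 9, 11,
           10, 12, 10, 9, 11, 10, 10, 9, 11, 10, 12, 10, 9, 10, 42]
if early_warning(history):
    print("Early-warning signal: investigate before escalating.")

The specific statistics matter less than the design point: simple, decentralized detectors running on aggregate signals can trigger human investigation early, without requiring centralized control of the underlying knowledge.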

Practical Recommendations for Policymakers

  1. Invest in Decentralized Detection Systems: Rather than focusing solely on centralized control of information, invest in widespread, affordable detection systems for biological agents, both natural and human-caused. These “biological smoke detectors” could be deployed in public spaces, transportation hubs, and water systems.
  2. Develop Public-Private Partnerships: Encourage collaboration between government agencies and private companies to develop and deploy these detection systems. The private sector often moves faster and more efficiently than government alone.
  3. Focus on Response Capabilities: Enhance our ability to respond quickly to detected threats. This includes developing rapid response teams, stockpiling necessary medical countermeasures, and creating clear communication channels.
  4. Promote International Cooperation: Biological threats don’t respect national boundaries. Work with international partners to create global detection networks and response protocols.
  5. Balance Innovation and Security: Avoid heavy-handed regulations that stifle innovation. Instead, create frameworks that allow for scientific advancement while building security considerations from the start.
  6. Invest in Education and Training: Develop programs to train the next generation of biosecurity experts who understand both the biological and computational aspects of these challenges.
  7. Establish Certified AI and Data Scientists: Create a professional certification program for AI and data scientists working in biological fields, requiring them to take a digital Hippocratic oath, much as CPAs are certified in their profession. These professionals would commit to staying vigilant, continuously improving their skills, and evolving better solutions in these important spaces, all while prioritizing public safety and ethical considerations.
  8. Create Incentives for Security: Develop incentives for researchers and companies to build security in their work from the beginning, rather than as an afterthought.

The Path Forward

Should we use non-obvious ways to detect, deter, and, if needed, interdict bad human actors? Certainly yes. Yet the United States needs to do better when it comes to the future of AI and bio security. We need approaches that recognize the distinction between knowledge and expertise, that focus on decentralized detection and response rather than centralized control, and that balance security concerns with the tremendous potential benefits these technologies offer.

The time has come for us to harness our greatest U.S. strengths: innovation, collaboration, and pragmatic optimism. By bringing together our brightest minds across disciplines, embracing responsible technological advancement, and building systems that protect while empowering, we can transform these converging revolutions in AI and biotechnology from perceived threats into our greatest opportunities.

This is our moment to create a future where technology serves humanity’s highest aspirations, where biological and digital innovations work in harmony to solve our most pressing challenges, and where the benefits of these breakthroughs are accessible to all. Let us move forward not with fear, but with vision, determination, and the courage to build something extraordinary together.

David Bray

About the Author

Dr. David A. Bray is a Distinguished Fellow at the non-partisan Henry L. Stimson Center, a non-resident Distinguished Fellow with the Business Executives for National Security, and a CEO and transformation leader for different “under the radar” tech and data ventures seeking to get started in novel situations. He is also Principal at LeadDoAdapt Ventures and has served in a variety of leadership roles in turbulent environments, including bioterrorism preparedness and response from 2000-2005, Executive Director for a bipartisan National Commission on R&D, providing non-partisan leadership as a federal agency Senior Executive, work with the U.S. Navy and Marines on improving organizational adaptability, and work with U.S. Special Operations Command’s J5 Directorate on the challenges of countering disinformation online. He has received both the Joint Civilian Service Commendation Award and the National Intelligence Exceptional Achievement Medal. In December 2019, David accepted a leadership role directing the successful bipartisan Commission on the Geopolitical Impacts of New Technologies and Data, which included Senator Mark Warner, Senator Rob Portman, Rep. Suzan DelBene, and Rep. Michael McCaul. From 2017 to the start of 2020, David also served as Executive Director for the People-Centered Internet coalition chaired by Internet co-originator Vint Cerf, and he was named a Senior Fellow with the Institute for Human-Machine Cognition starting in 2018. Business Insider named him one of the top “24 Americans Who Are Changing the World” under 40, and he was named a Young Global Leader by the World Economic Forum. He has served in President, CEO, Chief Strategy Officer, and Strategic Advisor roles for twelve different startups.