
Cybersecurity and Cyber Incidents: Innovation and Design Lessons from Aviation Safety Models and a Call for a “Cyber NTSB”

In a recent four-month workshop, over 70 experts explored the concept of creating a “Cyber NTSB”. The topic is consistent with themes that cut across much of our recent OODA Loop research and analysis: innovation and the design processes that drive it. It all starts with a design metaphor: Neal Stephenson’s speculative fiction provides the metaphor for Bob Gourley’s What to Know and Do About the Coming Metaverse, and hacker culture is the metaphor and design thinking framework for Matt Devost’s HACKThink.

This recent workshop used the National Transportation Safety Board (NTSB) as a design analogy/metaphor for a National Cyber Safety Board/National Cyber Security Board (NCSB). Specifically, the goal of the “Cyber NTSB” workshop was innovation in “lesson-learning systems” for cybersecurity and cyber incidents, taking design process inspiration from the aviation safety models of the NTSB. The AI research and policy community, when framing issues around AI security and the adoption and deployment of AI systems, is exploring a similar incident/accident metaphor borrowed from the aviation industry, including an open-source taxonomy architecture and an artificial intelligence incident database.
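To make the idea of a shared “lesson-learning system” more concrete, here is a minimal sketch of what a structured entry in such an incident database might look like, written in Python. The field names and harm categories are illustrative assumptions, not the schema of any existing incident database or taxonomy.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative harm categories only; real taxonomies (e.g., CSET's
# "AI Accidents" framework) are far more granular.
HARM_TAXONOMY = {"availability", "integrity", "confidentiality", "safety"}


@dataclass
class IncidentRecord:
    """Hypothetical minimal entry for a cyber/AI incident database."""
    incident_id: str
    reported_on: date
    affected_systems: list[str]
    harm_categories: set[str]
    narrative: str                    # factual timeline and description
    lessons_learned: list[str] = field(default_factory=list)

    def validate(self) -> None:
        # Reject categories outside the shared taxonomy so records
        # remain comparable across reporting organizations.
        unknown = self.harm_categories - HARM_TAXONOMY
        if unknown:
            raise ValueError(f"Unrecognized harm categories: {unknown}")


# Example entry
record = IncidentRecord(
    incident_id="2021-0001",
    reported_on=date(2021, 11, 1),
    affected_systems=["software build pipeline"],
    harm_categories={"integrity"},
    narrative="A compromised build system distributed a trojanized update.",
    lessons_learned=["Sign and independently verify all build artifacts."],
)
record.validate()
```

The design point worth noting is the shared, constrained vocabulary: a common taxonomy is what would allow lessons from one organization’s incident to be aggregated and compared with another’s.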

Like the “Cyber NTSB”, organizational design and innovation from other markets or industries can inspire design processes, as in our recent analysis of AI-based ambient intelligence innovation in healthcare and the future of public safety. One could also take design cues from the high-performance computing/data/simulation “stack” and the networked, global collaboration model NASA uses to reveal climate impacts on global crops. For decades, computing architecture design and innovation have been influenced by, and found success drawing inspiration from, biological neural systems (i.e., the brain), advancing our understanding of the vital role such systems play in solving complex problems.

Cybersecurity Regulation, Investigation Frameworks, and Lesson-learning Systems for Cyber Incidents

This “Cyber NTSB” workshop is a very interesting development, not least because it arrived at a time of saturation, even onslaught (call it scope creep or analysis paralysis), of recommendations, frameworks, and research on how best to think about innovation and design processes for cybersecurity regulatory organizations. The Cyber NTSB workshop report, “Learning from Cyber Incidents – Adapting Aviation Safety Models to Cybersecurity: Report on the Interdisciplinary Workshop on the Development of a National Capacity for the Investigation of Cyber Incidents”, takes this problem head-on:

“There are many immediate opportunities to improve security that policymakers in Congress and the Administration are pursuing. Similarly, there has been an explosion of research in information security, assurance, resilience, and other disciplines. These have obscured and eclipsed the need for deep lesson learning systems.”

Where and how do we start working on solutions amid these competing approaches? A quick analogy from another security space: NATO and the DoD have embarked on organizational design processes based on core principles of ethical artificial intelligence. It is an important organizational design step. These principles will then become operational for NATO, shaping budgets, the design of testbed centers, NATO collaborative models for AI, AI security system design specifications, and more. The DoD is further along in operationalizing its AI principles. AI researchers found that the DoD and NATO principles and design efforts align closely with an analysis of more than 80 global institutions’ announcements and design processes around ethical AI principles.

This type of broad alignment matters. Markets do not like uncertainty, and neither, apparently, do new organizational designs for regulation and oversight, which need to prioritize trust, integrity, authenticity, and transparency. A great deal of this organizational design and systems thinking is happening right now to address big global challenges and domestic threats. To be frank: at times this glut of thought leadership, reports, research, and think pieces seems as overwhelming as the threats themselves, and research on these issues can feel like yet another frustrating aspect of this unique era of parallel global crises and uncertainties (cybersecurity, emerging technologies, climate change, disinformation, etc.).

Ethical AI efforts seem to be coming into focus and standardizing. Less clear: cybersecurity regulation, investigation frameworks, and lesson-learning systems for cyber incidents, along with the design processes for innovation that will frame how to think about these issues.

NTSB May Prove the Correct Design Metaphor

Organized by Harvard’s Belfer Center and Northeastern University’s Global Resilience Institute, the workshop and final report were funded by the National Science Foundation and the Hewlett Foundation. The workshop participants and the report are unabashed in their evangelism for the creation of a “Cyber NTSB”:

“While participants challenged and tested the model, the ultimate conclusion was that the information technology industry does not have strong processes for extracting lessons learned and publishing them when incidents occur. Today, cybersecurity has no authoritative, independent investigations whose focus is learning lessons, distributing them, and enabling systematic improvements.”

“Companies are unlikely to fully cooperate under a voluntary regime. Subpoena authority will likely be necessary for a board to succeed in gaining access to the necessary data and people to reconstruct a timeline and narrative of any incident. While the nascent Cyber Safety Review Board (CSRB) may be able to gain some insights into SolarWinds given the high profile of that incident, reviews of other incidents will likely be near impossible unless companies are required to cooperate.”

This strong advocacy is a good thing, as the workshop’s proponents will have to continue to position their arguments for an independent NCSB “with teeth” against many competing efforts, such as the nascent Cyber Safety Review Board (CSRB), the newly formed Office of the National Cyber Director, and CISA’s Joint Cyber Defense Collaborative with the private sector. Maybe all of these organizational design processes and innovations will fuse, in the end, into the creation of a standalone NCSB. If that proves to be the case, then the history of how the NTSB was taken out of the Department of Transportation and made an independent agency in 1974 would further prove that the NTSB was, in the end, the right design metaphor.

A direct link to the workshop report: Learning from Cyber Incidents – Adapting Aviation Safety Models to Cybersecurity: Report on the Interdisciplinary Workshop on the Development of a National Capacity for the Investigation of Cyber Incidents

Further Reading – Innovation, Organizational Design, and Design Processes for Innovation:

“AI Accidents” framework from the Georgetown University CSET

NATO and US DoD AI Strategies Align with over 80 International Declarations on AI Ethics

OODA Loop – The Five Modes of HACKthink

OODA Loop – What To Know And Do About The Coming Metaverse

Brain-inspired, Light-enabled, Circuit-fueled: Neuromorphic Computing Innovation, Intel’s Chip Platform, and Open-Source Developer Ecosystem

AI-Based Ambient Intelligence Innovation in Healthcare and the Future of Public Safety

Related Reading:

Black Swans and Gray Rhinos

Now more than ever, organizations need to apply rigorous thought to business risks and opportunities. In doing so it is useful to understand the concepts embodied in the terms Black Swan and Gray Rhino. See: Potential Future Opportunities, Risks and Mitigation Strategies in the Age of Continuous Crisis

Cybersecurity Sensemaking: Strategic intelligence to inform your decisionmaking

The OODA leadership and analysts have decades of experience in understanding and mitigating cybersecurity threats and apply this real-world practitioner knowledge in our research and reporting. This page on the site is a repository of the best of our actionable research as well as a news stream of our daily reporting on cybersecurity threats and mitigation measures. See: Cybersecurity Sensemaking

Corporate Sensemaking: Establishing an Intelligent Enterprise

OODA’s leadership and analysts have decades of direct experience helping organizations improve their ability to make sense of their current environment and assess the best courses of action for success going forward. This includes helping establish competitive intelligence and corporate intelligence capabilities. Our special series on the Intelligent Enterprise highlights research and reports that can accelerate any organization along their journey to optimized intelligence. See: Corporate Sensemaking

Artificial Intelligence Sensemaking: Take advantage of this megatrend for competitive advantage

This page serves as a dynamic resource for OODA Network members looking for Artificial Intelligence information to drive their decision-making process. This includes a special guide for executives seeking to make the most of AI in their enterprise. See: Artificial Intelligence Sensemaking

COVID-19 Sensemaking: What is next for business and governments

From the very beginning of the pandemic, we have focused on research into what may come next and what to do about it today. This section of the site captures the best of our reporting plus daily intelligence, as well as pointers to reputable information from other sites. See: OODA COVID-19 Sensemaking Page.

Space Sensemaking: What does your business need to know now

A dynamic resource for OODA Network members looking for insights into the current and future developments in Space, including a special executive’s guide to space. See: Space Sensemaking

Quantum Computing Sensemaking

OODA is one of the few independent research sources with experience in due diligence on quantum computing and quantum security companies and capabilities. Our practitioner’s lens on insights ensures our research is grounded in reality. See: Quantum Computing Sensemaking.

The OODAcast Video and Podcast Series

In 2020, we launched the OODAcast video and podcast series, designed to provide you with insightful analysis and intelligence to inform your decision-making process. We do this through a series of expert interviews and topical videos highlighting global technologies such as cybersecurity, AI, and quantum computing, along with discussions on global risk and opportunity issues. See: The OODAcast


About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.