The release and success of ChatGPT have thrown policymakers into a frenzy of AI regulatory activity. Meanwhile, as with privacy and crypto regulation, the EU is setting the global standard with the recent passage of the EU Artificial Intelligence Act. Over the course of 2021 – as we reviewed a variety of research efforts concerned with AI Risk, AI Ethics, and Trustworthy AI – we made the following recommendations at the enterprise level:
In a July 2021 policy brief, “AI Accidents: An Emerging Threat – What Could Happen and What to Do,” the Center for Security and Emerging Technology (CSET) makes a noteworthy contribution to current efforts by governmental entities, industry, AI think tanks, and academia to “name and frame” the critical issues surrounding AI risk probability and impact.
https://oodaloop.com/archive/2021/08/19/ai-accidents-framework-from-the-georgetown-university-cset/
“For the current enterprise, as we pointed out as early as 2019 in Securing AI – Four Areas to Focus on Right Now, the fact still remains that “having a robust AI security strategy is a precursor that positions the enterprise to address these critical AI issues.” In addition, enterprises that have adopted and deployed AI systems also need to commit to the systematic logging and analysis of AI-related accidents and incidents.
Look no further than this CSET policy brief as a blueprint for such AI accident notification, reporting, and analysis efforts. To foster open collaboration within your organization, the opportunity also exists to build your reliability and safety engineering activities in collaboration with the open innovation and crowdsourcing database (with a very compelling open-source taxonomy architecture) on which CSET authors Zachary Arnold and Helen Toner have based their AI accidents framework and policy recommendations: The Artificial Intelligence Incident Database (AIID) – a project housed at the Partnership on AI.”
Recently, Senator Chuck Schumer announced plans for AI regulatory legislation in the U.S.:
“Schumer said that during the first half of the year, he and his team have been holding discussions with more than 100 AI developers, executives, scientists, workforce experts, and others to develop their legislative framework.
The “SAFE Innovation for AI” framework has five central pillars:
Not only in the framework proposed by Senator Schumer, but also in the general regulatory discussions that have ensued since the open letter calling for a pause on the training of AI systems more powerful than GPT-4 and OpenAI CEO Sam Altman's testimony before the U.S. Senate on “Oversight of A.I.: Rules for Artificial Intelligence,” we have been surprised to see that the Review Board Model (like the NTSB or the recently launched Cyber Safety Review Board) has gained little traction (and has barely been mentioned at all in the last few months).
As a result, we went back to the AI Incident Database. It has grown and evolved since its launch in 2021, and it has the potential to become a private sector, “bottom-up” phenomenon with a platform designed for growth at speed and scale, potentially evolving into a standalone, self-governing AI Review Board. The World Wide Web Consortium (W3C) comes to mind over the often-mentioned NTSB model, and the CISA Cybersecurity Alerts and Advisories model probably offers some structural lessons as well.
And why not? The promise of the AI Incident Database is that the lessons from the AI accidents entered into the database can become standards and guidelines over time (i.e., the aggregation of AI accidents becomes a template for mitigation standards and guidelines, and possibly even regulations and laws). As the update below on the AI Incident Database illustrates, AI accidents are already occurring and already have a mechanism and a taxonomy for reporting.
AI innovation is occurring at exponential speed. There are no good arguments against the use and growth of this platform by the public and private sectors in parallel with governmental efforts to figure out their role in a regulatory framework for AI. If history is any guide, that process will move at a snail’s pace, and this emerging technology is simply moving too fast to put all our eggs in the governmental regulatory basket. The AI Incident Database has all the design qualities and the ethos of the best elements of the open-source movement, making it the most viable collective intelligence risk mitigation effort relative to the success or failure of governmental policymaking efforts.
Since our previous analysis in 2021, the AIID has launched a monthly incident newsletter, of which April 2023 is the most recent publicly available version – and representative of the issues the database is currently tackling:
In April, there were several news stories centering on false accusations and impersonation. Whereas AI-powered voice synthesis systems were being used as tools by humans to impersonate loved ones for cash, break into a government system, place false “swatting” calls, and mimic musicians, ChatGPT generated false stories of an Australian mayor and a professor committing crimes.
The AI Incident Database (AIID) is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings.
Intelligent systems are currently prone to unforeseen and often dangerous failures when they are deployed to the real world. Much like the transportation sector before it (e.g., FAA and FARS) and more recently computer systems, intelligent systems require a repository of problems experienced in the real world so that future researchers and developers may mitigate or avoid repeated bad outcomes.
The initial set of more than 1,000 incident reports has been intentionally broad in nature.
You are invited to explore the incidents collected to date, view the complete listing, and submit additional incident reports.
The commercial air travel industry owes much of its increasing safety to systematically analyzing and archiving past accidents and incidents within a shared database. In aviation, an accident is a case where substantial damage or loss of life occurs. Incidents are cases where the risk of an accident substantially increases. For example, when a small fire is quickly extinguished in a cockpit, it is an “incident,” but if the fire burns crew members in the course of being extinguished, it is an “accident.” The FAA aviation database indexes flight log data and subsequent expert investigations into comprehensive examinations of both technological and human factors. In part due to this continual self-examination, air travel is one of the safest forms of travel. Decades of iterative improvements to safety systems and training have decreased fatalities 81-fold since 1970, when normalized for passenger miles.
Where the aviation industry has clear definitions, computer scientists and philosophers have long debated foundational definitions of artificial intelligence. In the absence of clear lines differentiating algorithms, intelligence, and the harms they may directly or indirectly cause, this database adopts adaptive criteria for ingesting “incidents” where reports are accepted or rejected on the basis of a growing rule set defined in the Editor’s Guide.
The database is a constantly evolving data product and collection of applications.
When in doubt about whether an event qualifies as an incident, please submit it! This project is intended to converge on a shared definition of “AI Incident” through exploration of the candidate incidents submitted by the broader community.
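To make the reporting mechanism more concrete, below is a minimal sketch of what an internally logged AI incident record might look like before it is submitted to a database such as the AIID. The field names and the triage heuristic are illustrative assumptions loosely modeled on the metadata the public AIID listing surfaces; they are not the database’s actual submission schema or its Editor’s Guide criteria.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Illustrative only: these fields approximate the kind of metadata the AIID
# surfaces publicly (deployer, developer, harmed parties, source reports);
# they are not the database's actual submission schema.
@dataclass
class AIIncidentRecord:
    title: str
    occurred_on: date
    description: str
    alleged_deployer: str
    alleged_developer: str
    harmed_parties: List[str] = field(default_factory=list)
    source_urls: List[str] = field(default_factory=list)

def looks_like_incident(record: AIIncidentRecord) -> bool:
    """Crude triage heuristic (an assumption, not the AIID Editor's Guide):
    treat any event with identifiable harmed or nearly harmed parties as a
    candidate incident and queue it for editorial review."""
    return len(record.harmed_parties) > 0

# Example: a voice-cloning impersonation event of the kind summarized in the
# April 2023 AIID newsletter excerpt above.
candidate = AIIncidentRecord(
    title="Synthesized voice used to impersonate a relative in a cash scam",
    occurred_on=date(2023, 4, 1),
    description="Attackers used an AI voice-synthesis tool to impersonate "
                "a family member and request an emergency money transfer.",
    alleged_deployer="Unknown scam operators",
    alleged_developer="Unknown voice-synthesis vendor",
    harmed_parties=["Targeted family members"],
    source_urls=["https://incidentdatabase.ai/"],
)

if looks_like_incident(candidate):
    print(f"Queue for submission and editorial review: {candidate.title}")
```

The value of logging in this structured form is that, when aggregated across many organizations, records like these are exactly the raw material the database can distill into the mitigation standards and guidelines discussed above.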
The incident database is managed in a participatory manner by persons and organizations contributing code, research, and broader impacts. If you would like to participate in the governance of the project, please contact us and include your intended contribution to the AI Incident Database.
Voting Members
Helen Toner: Helen Toner is Director of Strategy at Georgetown’s Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University’s Center for the Governance of AI. Helen has written for Foreign Affairs and other outlets on the national security implications of AI and machine learning for China and the United States, and has testified before the U.S.-China Economic and Security Review Commission. She is a member of the board of directors for OpenAI. Helen holds an MA in Security Studies from Georgetown, as well as a BSc in Chemical Engineering and a Diploma in Languages from the University of Melbourne.
Contributions: AI incident research and oversight of the CSET taxonomy.
Patrick Hall: Patrick is principal scientist at bnh.ai, a D.C.-based law firm specializing in AI and data analytics. Patrick also serves as visiting faculty at the George Washington University School of Business. Before co-founding bnh.ai, Patrick led responsible AI efforts at the machine learning software firm H2O.ai, where his work resulted in one of the world’s first commercial solutions for explainable and fair machine learning. Among other academic and technology media writing, Patrick is the primary author of popular e-books on explainable and responsible machine learning. Patrick studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University.
Contributions: Patrick is the leading contributor of incident reports to the AI Incident Database Project.
Sean McGregor: Sean McGregor founded the AI Incident Database project and recently left a position as machine learning architect at the neural accelerator startup Syntiant so he could focus on the assurance of intelligent systems full time. Dr. McGregor’s work spans neural accelerators for energy efficient inference, deep learning for speech and heliophysics, and reinforcement learning for wildfire suppression policy. Outside his paid work, Sean organized a series of workshops at major academic AI conferences on the topic of “AI for Good” and is currently developing an incentives-based approach to making AI safer through audits and insurance.
Contributions: Sean volunteers as a project maintainer and editor of the AI Incident Database (AIID) project.
Non-Voting Members
Not all hope is lost when it comes to the ability of the U.S. Government to act quickly. Following is a series of posts that capture the workshop held in the spring of 2021 (and the publication generated by the workshop, released in November 2021) that called for the creation of a “Cyber NTSB,” which was then launched in February 2022.
“Over four months in the spring of 2021, over 70 experts participated in a (virtual) workshop on the concept of creating a “Cyber NTSB”. The workshop was funded by the National Science Foundation with additional support from the Hewlett Foundation, and organized by Harvard’s Belfer Center with support from Northeastern University’s Global Resilience Institute.
The first call for the creation of a Cyber NTSB was in 1991. Since that time, many practitioners and policymakers have invoked the analogy, but little has been done to develop the concept. This workshop was carried out with the goal of moving the concept forward.”
We have a choice: a timeline of 1991 through 2022 (over 30 years) or March 2021 through February 2022 (less than a year) for the creation of an AI Review Board.
https://oodaloop.com/archive/2021/11/15/cybersecurity-and-cyber-incidents-innovation-and-design-lessons-from-aviation-safety-models-and-a-call-for-a-cyber-ntsb/
https://oodaloop.com/archive/2022/02/03/cyber-safety-review-board-launched-by-dhs/
The Cyber Safety Review Board was established pursuant to President Biden’s Executive Order (EO) 14028 on ‘Improving the Nation’s Cybersecurity‘. The Board serves a deliberate function to review major cyber events and make concrete recommendations that would drive improvements within the private and public sectors. The Board’s construction is a unique and valuable collaboration of government and private sector members and provides a direct path to the Secretary of Homeland Security and the President to ensure the recommendations are addressed and implemented, as appropriate. As a uniquely constituted advisory body, the Board will focus on learning lessons and sharing them with those that need them to enable advances in national cybersecurity. (1)
In July 2022, the CSRB released its first report. See https://www.dhs.gov/news/2022/07/14/cyber-safety-review-board-releases-report-its-review-log4j-vulnerabilities-and
For a pdf of the report: https://www.cisa.gov/sites/default/files/publications/CSRB-Report-on-Log4-July-11-2022_508.pdf
For next steps in ensuring your business is approaching AI with risk mitigation in mind, see Artificial Intelligence for Business Advantage.
Looking for a primer on what executives need to know about real AI and ML? See A Decision-Maker’s Guide to Artificial Intelligence.
https://oodaloop.com/archive/2019/02/27/securing-ai-four-areas-to-focus-on-right-now/
https://oodaloop.com/archive/2023/05/28/when-artificial-intelligence-goes-wrong-2/
https://oodaloop.com/archive/2023/06/14/improving-mission-impact-with-a-culture-that-drives-adoption/
https://oodaloop.com/ooda-original/2023/04/26/the-cybersecurity-implications-of-chatgpt-and-enabling-secure-enterprise-use-of-large-language-models/
https://oodaloop.com/archive/2023/05/08/the-ooda-network-on-the-real-danger-of-ai-innovation-at-exponential-speed-and-scale-and-not-adequately-addressing-ai-governance/
https://oodaloop.com/archive/2023/03/26/march-2023-ooda-network-member-meeting-tackled-strategy-misinforming-regulation-systemic-failure-and-the-emergence-of-new-risks/
https://oodaloop.com/archive/2023/02/01/nist-makes-available-the-voluntary-artificial-intelligence-risk-management-framework-ai-rmf-1-0-and-the-ai-rmf-playbook/
https://oodaloop.com/archive/2022/07/05/ai-ml-enabled-systems-for-strategy-and-judgment-and-the-future-of-human-computer-data-interaction-design/