In October, we covered NATO’s release of its first-ever strategy for artificial intelligence in the OODA Loop Daily Pulse. The strategy is primarily concerned with the impact AI will have on NATO’s core commitments of collective defense, crisis management, and cooperative security. Worth a deeper dive is a framework within the overall NATO AI strategy that mirrors the efforts of the DoD Joint Artificial Intelligence Center (JAIC) and other U.S.-based initiatives to establish norms around AI: “NATO establishes standards of responsible use of AI technologies, in accordance with international law and NATO’s values.”
At the center of the NATO AI strategy are the “NATO Principles of Responsible Use of Artificial Intelligence in Defence,” which are based on the commitment of NATO and its Allies to “ensuring that the AI applications they develop and consider for deployment will be – at the various stages of their lifecycles – in accordance with the following six principles: Lawfulness, Responsibility and Accountability, Explainability and Traceability, Reliability, Governability, and Bias Mitigation.”
These NATO principles mirror a JAIC AI ethical standards framework released in October of 2019.
Following is a table comparing the NATO and JAIC AI Principles:
Table 1: Comparison of NATO and US DoD JAIC Artificial Intelligence Principles

| NATO Principle | US DoD (JAIC) Principle |
|---|---|
| Lawfulness | No explicit counterpart (implicit; see below) |
| Responsibility and Accountability | Responsible |
| Explainability and Traceability | Traceable |
| Reliability | Reliable |
| Governability | Governable |
| Bias Mitigation | Equitable |
First, to clarify: we should not be concerned that the JAIC framework lacks an explicit “lawfulness” principle, as the rule of law is implicit in policy documents generated by departments of the U.S. government. NATO, consistent with its charter, chose to make lawfulness explicit so as to include applications “developed and used in accordance with national and international law, including international humanitarian law and human rights law, as applicable.”
Otherwise, given the operational role of the U.S. within NATO, it should not be surprising that the language is almost identical – or that the NATO AI Principles were announced almost two years to the day after the JAIC announced its AI principles. The U.S. role in establishing these NATO AI principles has not been without pushback in the EU: reportedly, the timing of the NATO AI strategy’s release reflects pressure within the EU to ensure that NATO, not the U.S.-led AI Partnership for Defense, is at the helm on issues of AI and the defense of Europe.
It is also interesting to conduct an analysis at the level of “core principles,” an exercise usually the express purview of intergovernmental military alliances, international peace and development organizations, and the physical sciences. We have encountered at least one other example of this standards-level approach applied to global emerging technology initiatives.
This “declaration of standards” approach has made sense as a way of sorting out AI ethics issues during these early phases of the AI innovation and commercialization cycle. It is the necessary first step in analyzing the evolution of international AI norms among nation-states (partly because that is where those norms have started) and in distilling how much overlap (and disconnect) exists with private sector efforts to set AI guidelines and standards for new AI markets and platforms. For the private sector, the question now is: Are we there yet? Is there an agreed-upon ethical AI framework on which to build? Can we move on with confidence to the implementation and operations phases of our business strategies?
We are not alone in this analysis and line of questioning: AI researchers have also been concerned with the level of agreement that exists worldwide on core AI principles. They, too, want to be working from a specification.
This post was penned while taking in a live/virtual presentation from the AI Governance Symposium hosted by the Information Society Project’s Wikimedia/Yale Law School Initiative on Intermediaries and Information (WIII).
Baobao Zhang, an Assistant Professor of Political Science at Syracuse University, presented today on “Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers,” a summary of her larger survey study of the same name:
The following abstract, from a study that mapped 84 AI ethics documents from around the world, situates principles like NATO’s and the JAIC’s within that broader corpus:
“In the past five years, private companies, research institutions, and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards, and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analyzed the current corpus of principles and guidelines on ethical AI.
“Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility, and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain, or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.”
Zhang also reviewed compelling slides from the updated version of the study.
This cursory analysis of the NATO and JAIC documents, along with the results of the more exhaustive study of global ethical AI principles, provides a firm foundation on which to start fleshing out business strategies and operations (guidelines for AI implementation, etc.). At the very least, by adopting these principles, your organization aligns itself with NATO and the JAIC, enhancing its eligibility and competitiveness for future AI strategic partnerships and procurement opportunities.
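To make that concrete, here is a minimal sketch of how an organization might track coverage of the six NATO principles in an internal control register. Everything in it (the principle strings, the audit_coverage helper, and the sample artifact filenames) is a hypothetical illustration, not part of any NATO or JAIC tooling:

```python
# Hypothetical sketch: audit an AI project's documentation for coverage
# of the six NATO Principles of Responsible Use. All names below are
# illustrative assumptions, not any official NATO or JAIC artifact.

NATO_PRINCIPLES = [
    "Lawfulness",
    "Responsibility and Accountability",
    "Explainability and Traceability",
    "Reliability",
    "Governability",
    "Bias Mitigation",
]

def audit_coverage(controls: dict[str, str]) -> list[str]:
    """Return the principles with no documented control mapped to them."""
    return [p for p in NATO_PRINCIPLES if not controls.get(p)]

# Example register mapping each principle to the artifact that evidences
# it (policy document, test report, audit trail, etc.).
project_controls = {
    "Lawfulness": "legal-review-2021-11.pdf",
    "Responsibility and Accountability": "raci-matrix.xlsx",
    "Explainability and Traceability": "model-card.md",
    "Reliability": "",  # test report not yet produced
    "Governability": "kill-switch-design.md",
    "Bias Mitigation": "fairness-eval.ipynb",
}

for gap in audit_coverage(project_controls):
    print(f"Missing documented control for: {gap}")
```

A register like this is deliberately simple; the hard work is producing the evidence behind each entry, but even a flat mapping makes gaps visible before a partnership or procurement review does.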