“The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has released its Artificial Intelligence Risk Management Framework (AI RMF 1.0), a guidance document for voluntary use by organizations designing, developing, deploying or using AI systems to help manage the many risks of AI technologies. The AI RMF follows a direction from Congress for NIST to develop the framework and was produced in close collaboration with the private and public sectors. It is intended to adapt to the AI landscape as technologies continue to develop and to be used by organizations in varying degrees and capacities so that society can benefit from AI technologies while also being protected from its potential harms:
“Compared with traditional software, AI poses a number of different risks. AI systems are trained on data that can change over time, sometimes significantly and unexpectedly, affecting the systems in ways that can be difficult to understand. These systems are also “socio-technical” in nature, meaning they are influenced by societal dynamics and human behavior. AI risks can emerge from the complex interplay of these technical and societal factors, affecting people’s lives in situations ranging from their experiences with online chatbots to the results of job and loan applications.
The AI RMF is divided into two parts. The first part discusses how organizations can frame the risks related to AI and outlines the characteristics of trustworthy AI systems. The second part, the core of the framework, describes four specific functions — govern, map, measure and manage — to help organizations address the risks of AI systems in practice. These functions can be applied in context-specific use cases and at any stages of the AI life cycle. (1)
Alexandra Kelley from NextGov.com attended the launch event for the framework, offering a quick breakdown of the framework and industry response from some of the AI startups in attendance:
“The framework offers four interrelated functions as a risk mitigation method: govern, map, measure, and manage.
Govern sits at the core of the RMF’s mitigation strategy and is intended to serve as the bedrock of a foundational culture of risk prevention and management for any organization using the RMF.
Map comes next in the RMF game plan. This step works to contextualize potential risks in AI technology, and broadly identify the positive mission and uses of any given AI system, while simultaneously taking into account its limitations.
This context should then allow framework users to Measure how an AI system actually functions. Crucial to the “Measure” component is employing sufficient metrics that represent universal scientific and ethical norms. Strong measuring is then applied through “rigorous” software testing, further analyzed by external experts and user feedback. “Potential pitfalls when seeking to measure negative risk or harms include the reality that development of metrics is often an institutional endeavor and may inadvertently reflect factors unrelated to the underlying impact,” the report cautions. “Measuring AI risks includes tracking metrics for trustworthy characteristics, social impact, and human-AI configurations.”
The final step in the AI RMF mitigation strategy is Manage, whose main function is to allocate risk mitigation resources and ensure that previously established mechanisms are continuously implemented.
‘Framework users will enhance their capacity to comprehensively evaluate system trustworthiness, identify and track existing and emergent risks and verify the efficacy of the metrics,’ the report states.
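The four functions above are organizational processes rather than software, but a team operationalizing the RMF may want to track its progress through them programmatically. The sketch below is purely illustrative — the class, method names, and system name are our own hypothetical choices, not anything defined by NIST:

```python
from dataclasses import dataclass, field

# The four AI RMF functions, in the order the framework presents them.
RMF_FUNCTIONS = ("govern", "map", "measure", "manage")

@dataclass
class RMFAssessment:
    """Hypothetical tracker for which AI RMF functions have been
    addressed for a given AI system (illustrative only)."""
    system_name: str
    completed: dict = field(
        default_factory=lambda: {f: False for f in RMF_FUNCTIONS}
    )

    def complete(self, function: str) -> None:
        # Mark one RMF function as addressed for this system.
        if function not in RMF_FUNCTIONS:
            raise ValueError(f"Unknown AI RMF function: {function}")
        self.completed[function] = True

    def outstanding(self) -> list:
        # Functions not yet addressed, in RMF order.
        return [f for f in RMF_FUNCTIONS if not self.completed[f]]

# Example: a team that has established governance and mapped context,
# but has not yet measured or managed its risks.
assessment = RMFAssessment("loan-screening-model")
assessment.complete("govern")
assessment.complete("map")
print(assessment.outstanding())  # → ['measure', 'manage']
```

Because the RMF is voluntary and context-specific, a real implementation would attach evidence, metrics, and documentation to each function rather than a simple boolean.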
Business owners participating in the AI RMF also expressed optimism about the framework’s guidance. Navrina Singh, the CEO of AI startup Credo.AI and a member of the U.S. Department of Commerce National Artificial Intelligence Advisory Committee, said that customers seeking AI solutions want more holistic plans to mitigate bias. ‘Most of our customers…are really looking for a mechanism to build capacity around operationalizing responsible AI, which has done really well in the Govern function of the NIST AI RMF,’ she said during a panel following the RMF release. ‘The ‘Map, Measure, Manage’ components and how they can be actualized in a contextual way, in all these specific use cases within these organizations, is the next step that most of our customers are looking to take.’”
The AI RMF Playbook suggests ways to navigate and use the AI Risk Management Framework (AI RMF) to incorporate trustworthiness considerations in the design, development, deployment and use of AI systems. The current draft Playbook is based on AI RMF 1.0 (released on January 26, 2023) and includes suggested actions, references, and documentation guidance to achieve the outcomes for the four functions in the AI RMF: Govern, Map, Measure, and Manage. Playbook suggestions are developed based on best practices and research insights. Send us your feedback either as comments or as specific line-edit additions or modifications to [email protected] through February 27, 2023. A revised version will be posted in the Spring of 2023.
Playbook content is not considered final and is being released to enable community review and feedback about its informativeness, accuracy, and specificity. Aspects related to the presentation and delivery of Playbook suggestions are under development. Future online versions may include options for filtering or tailoring information to user preferences and requirements. The Playbook is an online resource and will be hosted temporarily on GitHub Pages. Its permanent home will be in the NIST Trustworthy and Responsible AI Resource Center. The AI Risk Management Framework (AI RMF 1.0) and this companion Playbook are intended for voluntary use.
The content has been developed considering a range of applications and risk levels. Playbook users are expected to exercise discretion and utilize as many – or as few – suggestions as are appropriate and applicable to their use cases or interests. Certain elements of the guidance may not be applicable in various contexts, including in low-risk implementations.
We encourage OODA Network Members and the OODA Loop readership to kick the tires of both the AI RMF and the AI RMF Playbook, as NIST is casting a broad community-building, collective-intelligence net over this effort: “working closely with the private and public sectors, NIST has been developing the AI RMF for 18 months. The document reflects about 400 sets of formal comments NIST received from more than 240 different organizations on draft versions of the framework. NIST today released statements from some of the organizations that have already committed to use or promote the framework.” (1)
Ongoing collaborative outreach efforts and further resources provided by NIST include:
“Community participation from a diverse group of sectors was critical to the development of the framework. Alondra Nelson, the Deputy Director for Science and Society at the White House Office of Science and Technology Policy, said that her office was one of the entities that gave NIST extensive input into the AI RMF 1.0. She added that the framework, like the White House AI Bill of Rights, puts the human experience and impact of AI algorithms first.” (1)
‘The AI RMF acknowledges that when it comes to AI and machine learning algorithms, we can never consider a technology outside of the context of its impact on human beings,’ [NIST Director Laurie] Locascio said. ‘The United States is taking a principled, sophisticated approach to AI that advances American values and meets the complex challenges of this technology and we should be proud of that.’ Much like the AI Bill of Rights, NIST’s AI RMF is a voluntary framework, with no penalties or rewards associated with its adoption. Regardless, Locascio hopes that the framework will be widely utilized and asked for continued community feedback as the agency plans to issue an update this spring.
“We’re counting on the broad community to help us to refine these roadmap priorities and do a lot of heavy lifting that will be called for,” Locascio said. “We’re counting on you to put this AI RMF 1.0 into practice.” (2)