Start your day with intelligence. Get The OODA Daily Pulse.
OODA CEO Matt Devost highlighted the advent of adversarial machine learning in his keynote at OODAcon 2022. In recent conversations with OODA Senior Advisor and Network Member Chris Ward, we discussed the challenges inherent in protecting Controlled Unclassified Information (CUI) in nonfederal systems and organizations.
NIST has working drafts available for public comment that address both strategic challenges. At the very least, take a look at this post and ask the question we are asking: Is NIST asking the right questions in its research efforts? Or, if these working drafts speak to your organization's core competencies or address risk awareness and strategy in a compelling way, consider participating in the open comment period on one or both research efforts.
“The National Institute of Standards and Technology has started soliciting public comments on an initial draft report on the terminology in the field of adversarial machine learning and taxonomy of mitigations and attacks.
NIST said Wednesday the terminology and the taxonomy seek to inform future practice guides and other standards for evaluating and managing the security of artificial intelligence systems through the establishment of a common language.
The agency is seeking insights on the latest attacks that threaten the current landscape of AI models, latest mitigations that are likely to withstand the test of time and new terminology that needs standardization.
Interested stakeholders could also share about the latest trends in AI technologies that intend to transform the industry or society, potential vulnerabilities related to such technologies and possible mitigations that could be developed to address such vulnerabilities.” (1)
The initial public draft of NIST AI 100-2 (2023 edition), Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, is now available for public comment.
This NIST report on artificial intelligence (AI) develops a taxonomy of attacks and mitigations and defines terminology in the field of adversarial machine learning (AML). Taken together, the taxonomy and terminology are meant to inform other standards and future practice guides for assessing and managing the security of AI systems by establishing a common language for understanding the rapidly developing AML landscape. Future updates to the report will likely be released as attacks, mitigations, and terminology evolve.
NIST is specifically interested in comments on and recommendations for the following topics:
The public comment period for this draft is open through September 30, 2023. See the publication details for a copy of the draft and instructions for submitting comments. NIST intends to keep the document open for comments for an extended period of time to engage with stakeholders and invite contributions to an up-to-date taxonomy that serves the needs of the public.
This NIST AI report develops a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). The taxonomy is built on a survey of the AML literature and is arranged in a conceptual hierarchy that includes key types of ML methods, lifecycle stages of attack, attacker goals and objectives, and attacker capabilities and knowledge of the learning process. The report also provides corresponding methods for mitigating and managing the consequences of attacks and points out relevant open challenges to take into account in the lifecycle of AI systems. The terminology used in the report is consistent with the literature on AML and is complemented by a glossary that defines key terms associated with the security of AI systems and is intended to assist non-expert readers. Taken together, the taxonomy and terminology are meant to inform other standards and future practice guides for assessing and managing the security of AI systems, by establishing a common language and understanding of the rapidly developing AML landscape. (2)
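To make the taxonomy concrete: one of the canonical attack classes it covers is the evasion attack, in which an attacker perturbs an input at inference time to flip a model's prediction. The sketch below is an illustrative toy example, not taken from the NIST draft: it applies the fast gradient sign method (FGSM) to a hand-built logistic-regression classifier using only NumPy. The weights, input, and perturbation budget are all assumptions chosen for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" logistic-regression model over 4 features
# (illustrative weights, not from any real system).
w = np.array([1.5, -2.0, 0.5, 1.0])
b = -0.25

def predict(x):
    # Probability that x belongs to class 1.
    return sigmoid(x @ w + b)

# A clean input the model confidently labels class 1.
x_clean = np.array([1.0, -0.5, 0.8, 0.3])

# FGSM evasion step: move the input in the direction that increases
# the logistic loss for the true label. For true label y = 1, the
# gradient of the loss w.r.t. x is (p - y) * w.
y_true = 1.0
p = predict(x_clean)
grad_x = (p - y_true) * w

epsilon = 0.7  # attacker's L-infinity perturbation budget (assumed)
x_adv = x_clean + epsilon * np.sign(grad_x)

print(f"clean prediction:       {predict(x_clean):.3f}")  # well above 0.5
print(f"adversarial prediction: {predict(x_adv):.3f}")    # pushed below 0.5
```

In the taxonomy's terms, this attacker has white-box knowledge (full access to the gradient) and a simple evasion goal; mitigations surveyed in the report, such as adversarial training, aim to make predictions robust within exactly this kind of perturbation budget.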
Date Published: May 10, 2023
Comments Due: July 14, 2023
Email Comments to: [email protected]
This update to NIST SP 800-171 represents over one year of data collection, technical analyses, customer interaction, redesign, and development of the security requirements and supporting information for the protection of Controlled Unclassified Information (CUI). Many trade-offs have been made to ensure that the technical and non-technical requirements have been stated clearly and concisely while also recognizing the specific needs of both federal and nonfederal organizations.
Significant changes in NIST SP 800-171, Revision 3 include:
Additional files include an FAQ, a detailed analysis of the changes between Revision 2 and Revision 3, and a prototype CUI Overlay.
The public comment period is open now through July 14, 2023. We strongly encourage you to use this comment template if possible, and submit it to [email protected].
Reviewers are encouraged to comment on all or parts of draft NIST SP 800-171, Revision 3. NIST is specifically interested in comments, feedback, and recommendations for the following topics:
Comments received in response to this request will be posted on the Protecting CUI project site after the due date. Submitters’ names and affiliations (when provided) will be included, while contact information will be removed.
Please direct questions and comments to [email protected].
NOTE: A call for patent claims is included on page ii of this draft. For additional information, see the Information Technology Laboratory (ITL) Patent Policy – Inclusion of Patents in ITL Publications.
Protecting Controlled Unclassified Information (CUI) in nonfederal systems and organizations is critical to federal agencies. The suite of guidance (NIST Special Publication (SP) 800-171, SP 800-171A, SP 800-172, and SP 800-172A) focuses on protecting the confidentiality of CUI and recommends specific security requirements to achieve that objective.