
NIST Wants Your Input Now on Two Strategic Challenges: Adversarial Machine Learning and Protecting Controlled Unclassified Information (CUI)

OODA CEO Matt Devost highlighted the advent of adversarial machine learning in his keynote at OODAcon 2022. In recent conversations with OODA Senior Advisor and Network Member Chris Ward, we discussed the challenges inherent in protecting Controlled Unclassified Information (CUI) in nonfederal systems and organizations.

NIST has working drafts available for public comment that address both strategic challenges. At the very least, take a look at this post and ask the question we are asking: Is NIST asking the right questions in its research efforts? Or, if these working drafts speak to your organization’s core competencies or address risk awareness and strategy in a compelling manner, consider participating in the open comment period on one or both research efforts.

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations | NIST Draft Available for Comment

“The National Institute of Standards and Technology has started soliciting public comments on an initial draft report on the terminology in the field of adversarial machine learning and taxonomy of mitigations and attacks.

NIST said Wednesday the terminology and the taxonomy seek to inform future practice guides and other standards for evaluating and managing the security of artificial intelligence systems through the establishment of a common language.

The agency is seeking insights on the latest attacks that threaten the current landscape of AI models, latest mitigations that are likely to withstand the test of time and new terminology that needs standardization.

Interested stakeholders could also share about the latest trends in AI technologies that intend to transform the industry or society, potential vulnerabilities related to such technologies and possible mitigations that could be developed to address such vulnerabilities.” (1)

The initial public draft of NIST AI 100-2 (2023 edition), Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, is now available for public comment.

This NIST report on artificial intelligence (AI) develops a taxonomy of attacks and mitigations and defines terminology in the field of adversarial machine learning (AML). Taken together, the taxonomy and terminology are meant to inform other standards and future practice guides for assessing and managing the security of AI systems by establishing a common language for understanding the rapidly developing AML landscape. Future updates to the report will likely be released as attacks, mitigations, and terminology evolve.
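Among the attack classes such a taxonomy covers are evasion attacks, in which an attacker perturbs an input to degrade a model's prediction. As an illustration (not drawn from the NIST draft itself), the well-known Fast Gradient Sign Method (FGSM) nudges an input in the sign of the loss gradient; the toy linear classifier and numbers below are invented for this sketch:

```python
import numpy as np

# Toy linear classifier: p(y=1|x) = sigmoid(w.x + b)
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss_wrt_x(x, y):
    # Gradient of the logistic loss with respect to the INPUT x
    # (not the weights): dL/dx = (sigmoid(w.x + b) - y) * w
    return (sigmoid(w @ x + b) - y) * w

def fgsm(x, y, eps=0.25):
    # Fast Gradient Sign Method: perturb x in the direction that
    # increases the loss, bounded by eps in the L-infinity norm.
    return x + eps * np.sign(grad_loss_wrt_x(x, y))

x = np.array([0.2, -0.4, 1.0])  # a clean input the model assigns to y=1
y = 1
x_adv = fgsm(x, y)

print(sigmoid(w @ x + b))      # model confidence on the clean input
print(sigmoid(w @ x_adv + b))  # confidence drops on the perturbed input
```

A small, bounded perturbation like this can meaningfully shift a model's output, which is the kind of attack-and-mitigation pairing the taxonomy aims to name consistently.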

NIST is specifically interested in comments on and recommendations for the following topics:

  • What are the latest attacks that threaten the existing landscape of AI models?
  • What are the latest mitigations that are likely to withstand the test of time?
  • What are the latest trends in AI technologies that promise to transform the industry/society? What potential vulnerabilities do they come with? What promising mitigations may be developed for them?
  • Is there new terminology that needs standardization?

The public comment period for this draft is open through September 30, 2023. See the publication details for a copy of the draft and instructions for submitting comments. NIST intends to keep the document open for comments for an extended period of time to engage with stakeholders and invite contributions to an up-to-date taxonomy that serves the needs of the public. 

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations – White Paper NIST AI 100-2e2023 (Draft)

Abstract

This NIST AI report develops a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). The taxonomy is built on a survey of the AML literature and is arranged in a conceptual hierarchy that includes key types of ML methods and lifecycle stages of attack, attacker goals and objectives, and attacker capabilities and knowledge of the learning process. The report also provides corresponding methods for mitigating and managing the consequences of attacks and points out relevant open challenges to take into account in the lifecycle of AI systems. The terminology used in the report is consistent with the literature on AML and is complemented by a glossary that defines key terms associated with the security of AI systems and is intended to assist non-expert readers. Taken together, the taxonomy and terminology are meant to inform other standards and future practice guides for assessing and managing the security of AI systems, by establishing a common language and understanding of the rapidly developing AML landscape. (2)

For more OODA Loop News Briefs and Original Analysis, go to Adversarial Machine Learning | OODA Loop
 

Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations

Date Published: May 10, 2023
Comments Due: July 14, 2023
Email Comments to: [email protected]

Author(s)

Ron Ross (NIST), Victoria Pillitteri (NIST)

Announcement

This update to NIST SP 800-171 represents over one year of data collection, technical analyses, customer interaction, redesign, and development of the security requirements and supporting information for the protection of Controlled Unclassified Information (CUI). Many trade-offs have been made to ensure that the technical and non-technical requirements have been stated clearly and concisely while also recognizing the specific needs of both federal and nonfederal organizations.

Significant changes in NIST SP 800-171, Revision 3 include:

  1. Updates to the security requirements and families to reflect updates in NIST SP 800-53, Revision 5 and the NIST SP 800-53B moderate control baseline
  2. Updated tailoring criteria
  3. Increased specificity for security requirements to remove ambiguity, improve the effectiveness of implementation, and clarify the scope of assessments
  4. Introduction of organization-defined parameters (ODP) in selected security requirements to increase flexibility and help organizations better manage risk
  5. A prototype CUI overlay
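Organization-defined parameters (ODPs) let a single requirement statement carry assignment placeholders that each organization fills with values matching its own risk tolerance. As a hedged sketch (the requirement wording below is paraphrased for illustration, not quoted from the draft), resolving ODPs amounts to substituting chosen values into the placeholders:

```python
import re

# Illustrative requirement text with two assignment placeholders.
# The wording is a paraphrase for this example, not the draft's text.
requirement = (
    "Limit the number of consecutive invalid logon attempts to "
    "[Assignment: organization-defined number] within "
    "[Assignment: organization-defined time period]."
)

# Values an organization might choose for the two placeholders, in order.
odp_values = ["5", "15 minutes"]

def resolve_odps(text, values):
    # Replace each [Assignment: ...] placeholder with the next chosen value.
    parts = iter(values)
    return re.sub(r"\[Assignment:[^\]]*\]", lambda m: next(parts), text)

print(resolve_odps(requirement, odp_values))
```

The point of ODPs is exactly this flexibility: the requirement stays uniform across organizations while the concrete thresholds remain a local risk-management decision.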

Additional files include an FAQ, a detailed analysis of the changes between Revision 2 and Revision 3, and a prototype CUI Overlay.

NIST will also host a webinar on June 6, 2023 to provide an overview of the significant changes to SP 800-171, Revision 3. Registration information will be announced separately through a GovDelivery announcement and on the Protecting CUI project site.
 

Submit Your Comments

The public comment period is open now through July 14, 2023. We strongly encourage you to use this comment template if possible, and submit it to [email protected].

Reviewers are encouraged to comment on all or parts of draft NIST SP 800-171, Revision 3. NIST is specifically interested in comments, feedback, and recommendations for the following topics:

  • Re-categorized controls (e.g., controls formerly categorized as NFO)
  • Inclusion of organization-defined parameters (ODP)
  • Prototype CUI overlay

Comments received in response to this request will be posted on the Protecting CUI project site after the due date. Submitters’ names and affiliations (when provided) will be included, while contact information will be removed.

Please direct questions and comments to [email protected].

NOTE: A call for patent claims is included on page ii of this draft. For additional information, see the Information Technology Laboratory (ITL) Patent Policy: Inclusion of Patents in ITL Publications.

Protecting Controlled Unclassified Information (CUI)

Protecting Controlled Unclassified Information (CUI) in nonfederal systems and organizations is critical to federal agencies. The suite of guidance (NIST Special Publication (SP) 800-171, SP 800-171A, SP 800-172, and SP 800-172A) focuses on protecting the confidentiality of CUI and recommends specific security requirements to achieve that objective.


About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.