This Cybersecurity Information Sheet (CSI) is the first-of-its-kind release from the NSA Artificial Intelligence Security Center (AISC) – “intended to support National Security System owners and Defense Industrial Base companies that will be deploying and operating AI systems designed and developed by an external entity…while intended for national security purposes, the guidance has application for anyone bringing AI capabilities into a managed environment” across all industry sectors.
“The AISC plans to work with global partners to develop a series of guidance on AI security topics as the field evolves…”
“Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems” is the first release from NSA’s Artificial Intelligence Security Center (AISC), issued in partnership with the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), and partner cybersecurity centers in Australia, Canada, New Zealand, and the United Kingdom.
While intended for national security purposes, the guidance has application for anyone bringing AI capabilities into a managed environment, especially those in high-threat, high-value environments. It builds upon the previously released Guidelines for Secure AI System Development and Engaging with Artificial Intelligence (AI).
This is the first guidance led by the Artificial Intelligence Security Center (AISC) and postures the center to support one of its central goals: improving the confidentiality, integrity, and availability of AI systems.
NSA established the AISC in September 2023 as part of the Cybersecurity Collaboration Center (CCC). The AISC was formed to detect and counter AI vulnerabilities; drive partnerships with experts from U.S. industry, national labs, academia, the IC, the DoD, and select foreign partners; develop and promote AI security best practices; and ensure NSA’s ability to stay in front of adversaries’ tactics and techniques.
The AISC plans to work with global partners to develop a series of guidance on AI security topics as the field evolves, such as on data security, content authenticity, model security, identity management, model testing and red teaming, incident response, and recovery.
“Attack vectors unique to AI may attract malicious actors on the hunt for sensitive data or intellectual property, the NSA warned.”
As reported at Cybersecurity Dive:
The rapid adoption of artificial intelligence tools is potentially making them “highly valuable” targets for malicious cyber actors, the National Security Agency warned in a recent report.
Bad actors looking to steal sensitive data or intellectual property may seek to “co-opt” an organization’s AI systems to achieve those goals, according to the report. The NSA recommends organizations adopt defensive measures such as promoting a “security-aware” culture to minimize the risk of human error and ensuring the organization’s AI systems are hardened to avoid security gaps and vulnerabilities.
“AI brings unprecedented opportunity, but also can present opportunities for malicious activity,” NSA Cybersecurity Director Dave Luber said in a press release.
The report comes amid growing concerns about potential abuses of AI technologies, particularly generative AI, including the Microsoft-backed OpenAI’s wildly popular ChatGPT model.
In February, OpenAI said in a blog post that it terminated the accounts of five state-affiliated threat groups who were using the startup’s large language models to lay the groundwork for malicious hacking efforts. The company acted in collaboration with Microsoft threat researchers. “Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft said in a separate blog post. “On the defender side, hardening these same security controls from attacks and implementing equally sophisticated monitoring that anticipates and blocks malicious activity is vital.”
The threat activity uncovered by OpenAI and Microsoft could just be a precursor for state-linked and criminal groups to rapidly deploy generative AI to strengthen their attack capabilities, cybersecurity and AI analysts told Cybersecurity Dive.
“Organizations should consider the following best practices to secure the deployment environment, continuously protect the AI system, and securely operate and maintain the AI system.”
Deploying artificial intelligence (AI) systems securely requires careful setup and configuration that depends on the complexity of the AI system, the resources required (e.g., funding, technical expertise), and the infrastructure used (i.e., on-premises, cloud, or hybrid). This report expands upon the ‘secure deployment’ and ‘secure operation and maintenance’ sections of the Guidelines for secure AI system development and incorporates mitigation considerations from Engaging with Artificial Intelligence (AI). It is for organizations deploying and operating AI systems designed and developed by another entity. The best practices may not be applicable to all environments, so the mitigations should be adapted to specific use cases and threat profiles.
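To make the “secure deployment” theme concrete, the sketch below shows one common control from this class of guidance: verifying the cryptographic hash of an externally developed model artifact before it is loaded. This is an illustrative example, not code from the report; the file path and the supplier-published digest are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical artifact path and supplier-published digest, for illustration only.
artifact = Path("models/classifier-v3.onnx")
published_digest = "replace-with-the-digest-published-by-the-model-supplier"

if artifact.exists() and sha256_of(artifact) != published_digest:
    raise RuntimeError(f"Integrity check failed for {artifact}; do not deploy.")
```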
AI security is a rapidly evolving area of research. As agencies, industry, and academia discover potential weaknesses in AI technology and techniques to exploit them, organizations will need to update their AI systems to address the changing risks, in addition to applying traditional IT best practices to AI systems.
The goals of the AISC and the report are to: improve the confidentiality, integrity, and availability of AI systems; assure that known cybersecurity vulnerabilities in AI systems are appropriately mitigated; and provide methodologies and controls to protect, detect, and respond to malicious activity against AI systems and related data and services.
The term AI systems throughout this report refers to machine learning (ML) based artificial intelligence (AI) systems. These best practices are most applicable to organizations deploying and operating externally developed AI systems on premises or in private cloud environments, especially those in high-threat, high-value environments. They are not applicable to organizations that are not deploying AI systems themselves and are instead leveraging AI systems deployed by others.
Not all of the guidelines will be directly applicable to all organizations or environments. The level of sophistication and the methods of attack will vary depending on the adversary targeting the AI system, so organizations should consider the guidance alongside their use cases and threat profile. See Guidelines for secure AI system development for design and development aspects of AI systems.
The rapid adoption, deployment, and use of AI capabilities can make them highly valuable targets for malicious cyber actors. Actors who have historically used the theft of sensitive data and intellectual property to advance their interests may seek to co-opt deployed AI systems and apply them to malicious ends.
Malicious actors targeting AI systems may use attack vectors unique to AI systems as well as standard techniques used against traditional IT. Given the large variety of attack vectors, defenses need to be diverse and comprehensive: advanced malicious actors often combine multiple vectors to execute more complex operations that can more effectively penetrate layered defenses.
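As a hedged illustration of layering defenses, the sketch below stacks a standard IT control (token authentication) with AI-specific checks: an input-length cap and per-client rate limiting, which slow adversarial probing and high-volume model-extraction querying. The function, token set, and thresholds are our own illustrative assumptions, not prescriptions from the report.

```python
import time
from collections import defaultdict

VALID_TOKENS = {"example-token"}   # standard IT control: authentication (placeholder value)
MAX_INPUT_CHARS = 4096             # AI-specific: bound the size of untrusted prompts
MAX_REQUESTS_PER_MINUTE = 60       # AI-specific: slow high-volume model-extraction querying

_recent_requests: dict[str, list[float]] = defaultdict(list)

def admit_request(client_token: str, prompt: str) -> bool:
    """Apply layered checks before a prompt is allowed to reach the model."""
    if client_token not in VALID_TOKENS:
        return False  # layer 1: reject unauthenticated callers
    if len(prompt) > MAX_INPUT_CHARS:
        return False  # layer 2: reject oversized inputs
    now = time.monotonic()
    window = [t for t in _recent_requests[client_token] if now - t < 60.0]
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        _recent_requests[client_token] = window
        return False  # layer 3: throttle high-volume querying
    window.append(now)
    _recent_requests[client_token] = window
    return True
```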
Organizations should consider the following best practices to secure the deployment environment, continuously protect the AI system, and securely operate and maintain the AI system.
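One hedged example of what “continuously protect the AI system” can look like in practice: emitting a structured audit record for every inference request so anomalous usage can be detected and investigated during incident response. The field names below are illustrative assumptions, not a schema from the report; hashing the input rather than logging it raw limits the spread of sensitive data.

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_inference(client_id: str, model_version: str, input_sha256: str, status: str) -> None:
    """Emit one structured audit record per inference request."""
    audit_log.info(json.dumps({
        "event_id": str(uuid.uuid4()),   # unique ID for cross-referencing alerts
        "timestamp": time.time(),        # epoch seconds
        "client_id": client_id,
        "model_version": model_version,  # ties the request to a specific deployed artifact
        "input_sha256": input_sha256,    # hash of the input, not the raw input
        "status": status,                # e.g., "allowed", "rejected", "error"
    }))

# Illustrative usage with placeholder values.
log_inference("client-42", "classifier-v3",
              "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
              "allowed")
```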
The best practices [outlined in this report] align with the cross-sector Cybersecurity Performance Goals (CPGs) developed by CISA and the National Institute of Standards and Technology (NIST). The CPGs provide a minimum set of practices and protections that CISA and NIST recommend all organizations implement. CISA and NIST based the CPGs on existing cybersecurity frameworks and guidance to protect against the most common and impactful threats, tactics, techniques, and procedures. Visit CISA’s Cross-Sector Cybersecurity Performance Goals for more information on the CPGs, including additional recommended baseline protections.
For the full NSA AISC report, go to this link.
The future scenarios for adversarial artificial intelligence (AI) attacks against the information technology infrastructure and critical infrastructure of the United States are alarming and complex. As a result, it is essential to recognize that the integration of AI into our digital and physical infrastructures not only enhances efficiency but also introduces novel vulnerabilities that can be exploited by adversaries.
All told, the future scenarios for adversarial AI attacks are diverse and require a multi-faceted response strategy that includes technological innovation, regulatory frameworks, international cooperation, and ethical governance.
For more OODA Loop News Briefs and Original Analysis, see:
Perspectives on AI Hallucinations: Code Libraries and Developer Ecosystems: Our hypothesis on AI Hallucinations is based on a quote from the OODAcon 2024 panel, “The Next Generative AI Surprise”: “Artificial Intelligence hallucinations may sometimes provide an output that is a very creative interpretation of something or an edge case that proves useful.” With that framing in mind, the following is the first installment in our survey of differing perspectives on the threats and opportunities created by AI hallucinations.
The Next Generative AI Surprise: At the OODAcon 2022 conference, we predicted that ChatGPT would take the business world by storm and included an interview with OpenAI Board Member and former Congressman Will Hurd. Today, thousands of businesses are being disrupted or displaced by generative AI. This topic was further examined at length at OODAcon 2023, taking a closer look at this innovation and its impact on business, society, and international politics. The following are insights from an OODAcon 2023 discussion between Pulkit Jaiswal, Co-Founder of NWO.ai, and Bob Flores, former CTO of the CIA.
What Can Your Organization Learn from the Use Cases of Large Language Models in Medicine and Healthcare?: It has become conventional wisdom that biotech and healthcare are the pace cars in implementing AI use cases with innovative business models and value-creation mechanisms. Other industry sectors should keep a close eye on the critical milestones and pitfalls of the biotech/healthcare space – with an eye toward what platform, product, service innovations, and architectures may have a portable value proposition within your industry. The Stanford Institute for Human-Centered AI (HAI) is doing great work fielding research in medicine and healthcare environments with quantifiable results that offer a window into AI as a general applied technology during this vast but shallow early implementation phase across all industry sectors of “AI for the enterprise.” Details here.

The Origins Story and the Future Now of Generative AI: This book explores generative artificial intelligence’s fast-moving impacts and exponential capabilities over just one year.
Generative AI – Socio-Technological Risks, Potential Impacts, Market Dynamics, and Cybersecurity Implications: The risks, potential positive and negative impacts, market dynamics, and security implications of generative AI have emerged – slowly, then rapidly, as the unprecedented hype cycle around artificial intelligence settled into a more pragmatic stoicism – with project deployments – throughout 2023.
In the Era of Code, Generative AI Represents National Security Risks and Opportunities for “Innovation Power”: We are entering the Era of Code. Code that writes code and code that breaks code. Code that talks to us and code that talks for us. Code that predicts and code that decides. Code that rewrites us. Organizations and individuals prioritizing understanding how the Code Era impacts them will develop increasing advantages in the future. At OODAcon 2023, we took a closer look at Generative AI innovation and its impact on business, society, and international politics. IQT and the Special Competitive Studies Project (SCSP) recently weighed in on this Generative AI “spark” of innovation that will “enhance all elements of our innovation power” – and the potential cybersecurity conflagrations that that same spark may also light. Details here.
Corporate Board Accountability for Cyber Risks: With a combination of market forces, regulatory changes, and strategic shifts, corporate boards and directors are now accountable for cyber risks in their firms. See: Corporate Directors and Risk
Geopolitical-Cyber Risk Nexus: The interconnectivity brought by the Internet has caused regional issues that affect global cyberspace. Now, every significant event has cyber implications, making it imperative for leaders to recognize and act upon the symbiosis between geopolitical and cyber risks. See: The Cyber Threat
Ransomware’s Rapid Evolution: Ransomware technology and its associated criminal business models have seen significant advancements. This has culminated in a heightened threat level, resembling a pandemic’s reach and impact. Yet, there are strategies available for threat mitigation. See: Ransomware, and update.
Challenges in Cyber “Net Assessment”: While leaders have long tried to gauge both cyber risk and security, actionable metrics remain elusive. Current metrics mainly determine if a system can be compromised without guaranteeing its invulnerability. It’s imperative not just to develop action plans against risks but to contextualize the state of cybersecurity concerning cyber threats. Despite its importance, achieving a reliable net assessment is increasingly challenging due to the pervasive nature of modern technology. See: Cyber Threat
Decision Intelligence for Optimal Choices: Numerous disruptions complicate situational awareness and can inhibit effective decision-making. Every enterprise should evaluate its data collection, assessment, and decision-making processes. For more insights, see: Decision Intelligence.
Proactive Mitigation of Cyber Threats: The relentless nature of cyber adversaries, whether they are criminals or nation-states, necessitates proactive measures. It’s crucial to remember that cybersecurity isn’t solely the IT department’s or the CISO’s responsibility – it’s a collective effort involving the entire leadership. Relying solely on governmental actions isn’t advised given its inconsistent approach towards aiding industries in risk reduction. See: Cyber Defenses
The Necessity of Continuous Vigilance in Cybersecurity: The consistent warnings from the FBI and CISA concerning cybersecurity signal potential large-scale threats. Cybersecurity demands 24/7 attention, even on holidays. Ensuring team endurance and preventing burnout by allocating rest periods are imperative. See: Continuous Vigilance
Embracing Corporate Intelligence and Scenario Planning in an Uncertain Age: Apart from traditional competitive challenges, businesses also confront unpredictable external threats. This environment amplifies the significance of Scenario Planning. It enables leaders to envision varied futures, thereby identifying potential risks and opportunities. Regardless of their size, all organizations should allocate time to refine their understanding of the current risk landscape and adapt their strategies. See: Scenario Planning