Details of a recent report from the Treasury Department on AI-Specific Cybersecurity Risks in the Financial Sector.
As reported by CYBERSCOOP: “Going forward, the financial services sector relayed that it would be helpful to have ‘a common lexicon’ on AI tools to aid in more productive discussions with third parties and regulators, ensuring that all stakeholders are speaking the same language. Report participants also said their firms would ‘benefit from the development of best practices concerning the mapping of data supply chains and data standards.’
The Treasury Department said it would work with the financial sector, as well as NIST, the Cybersecurity and Infrastructure Security Agency and the National Telecommunications and Information Administration to further discuss potential recommendations tied to those asks. In the coming months, Treasury officials will collaborate with industry, other agencies, international partners and federal and state financial sector regulators on critical initiatives tied to AI-related challenges in the sector.”
The ABA Banking Journal also itemized several next steps and opportunities arising from the report.
Related References:
Is the US Government Over-Regulating Artificial Intelligence? Maybe it is time to start thinking of ways to decontrol our AI.
Decontrol AI to Accelerate Solutions: In a previous post we asked “Is the US Government Over-Regulating Artificial Intelligence?” We followed it with some historical context and a summary of today’s government controls in “Regulations on Government Use of AI.” This post builds on those two to examine an example on how a decontrol mindset can help speed AI solutions into use.
Regulations on Government Use of AI: In a previous post in this series we raised a question: Is the US Government Over-Regulating AI? That post discussed the word “decontrol” as a concept whose time has come in discussions of AI use in the government. But how can we know if we are over-regulated without an overview of the regulatory environment? Here we provide a short overview of the regulation of government IT with an eye towards shedding light on ways to accelerate AI through some decontrol.
Perspectives on AI Hallucinations: Code Libraries and Developer Ecosystems: Our hypothesis on AI Hallucinations is based on a quote from the OODAcon 2024 panel “The Next Generative AI Surprise”: “Artificial Intelligence hallucinations may sometimes provide an output that is a very creative interpretation of something or an edge case that proves useful.” With that framing in mind, the following is the first installment in our survey of differing perspectives on the threats and opportunities created by AI hallucinations.
The Next Generative AI Surprise: At the OODAcon 2022 conference, we predicted that ChatGPT would take the business world by storm and included an interview with OpenAI Board Member and former Congressman Will Hurd. Today, thousands of businesses are being disrupted or displaced by generative AI. This topic was further examined at length at OODAcon 2023, taking a closer look at this innovation and its impact on business, society, and international politics. The following are insights from an OODAcon 2023 discussion between Pulkit Jaiswal, Co-Founder of NWO.ai, and Bob Flores, former CTO of the CIA.
What Can Your Organization Learn from the Use Cases of Large Language Models in Medicine and Healthcare?: It has become conventional wisdom that biotech and healthcare are the pace cars in implementing AI use cases with innovative business models and value-creation mechanisms. Other industry sectors should keep a close eye on the critical milestones and pitfalls of the biotech/healthcare space – with an eye toward what platform, product, service innovations, and architectures may have a portable value proposition within your industry. The Stanford Institute for Human-Centered AI (HAI) is doing great work fielding research in medicine and healthcare environments with quantifiable results that offer a window into AI as a general applied technology during this vast but shallow early implementation phase of “AI for the enterprise” across all industry sectors. Details here.
Two Emergent and Sophisticated Approaches to LLM Implementation in Cybersecurity: Google Security Engineering and the Carnegie Mellon University Software Engineering Institute (in collaboration with OpenAI) have sorted through the hype – and done some serious thinking and formal research on developing “better approaches for evaluating LLM cybersecurity” and AI-powered patching: the future of automated vulnerability fixes. This is some great formative framing of the challenges ahead as we collectively sort out the implications of the convergence of generative AI and future cyber capabilities (offensive and defensive).
The Origins Story and the Future Now of Generative AI: This book explores generative artificial intelligence’s fast-moving impacts and exponential capabilities over just one year.
Generative AI – Socio-Technological Risks, Potential Impacts, Market Dynamics, and Cybersecurity Implications: The risks, potential positive and negative impacts, market dynamics, and security implications of generative AI have emerged – slowly, then rapidly, as the unprecedented hype cycle around artificial intelligence settled into a more pragmatic stoicism – with project deployments – throughout 2023.
In the Era of Code, Generative AI Represents National Security Risks and Opportunities for “Innovation Power”: We are entering the Era of Code. Code that writes code and code that breaks code. Code that talks to us and code that talks for us. Code that predicts and code that decides. Code that rewrites us. Organizations and individuals that prioritize understanding how the Code Era impacts them will develop increasing advantages in the future. At OODAcon 2023, we will be taking a closer look at Generative AI innovation and its impact on business, society, and international politics. IQT and the Special Competitive Studies Project (SCSP) recently weighed in on this Generative AI “spark” of innovation that will “enhance all elements of our innovation power” – and the potential cybersecurity conflagrations that same spark may also light. Details here.
Corporate Board Accountability for Cyber Risks: With a combination of market forces, regulatory changes, and strategic shifts, corporate boards and directors are now accountable for cyber risks in their firms. See: Corporate Directors and Risk
Geopolitical-Cyber Risk Nexus: The interconnectivity brought by the Internet has caused regional issues that affect global cyberspace. Now, every significant event has cyber implications, making it imperative for leaders to recognize and act upon the symbiosis between geopolitical and cyber risks. See The Cyber Threat
Ransomware’s Rapid Evolution: Ransomware technology and its associated criminal business models have seen significant advancements. This has culminated in a heightened threat level, resembling a pandemic’s reach and impact. Yet, there are strategies available for threat mitigation. See: Ransomware, and update.
Challenges in Cyber “Net Assessment”: While leaders have long tried to gauge both cyber risk and security, actionable metrics remain elusive. Current metrics mainly determine if a system can be compromised without guaranteeing its invulnerability. It’s imperative not just to develop action plans against risks but to contextualize the state of cybersecurity concerning cyber threats. Despite its importance, achieving a reliable net assessment is increasingly challenging due to the pervasive nature of modern technology. See: Cyber Threat
Decision Intelligence for Optimal Choices: Numerous disruptions complicate situational awareness and can inhibit effective decision-making. Every enterprise should evaluate its data collection methods, assessment, and decision-making processes for more insights: Decision Intelligence.
Proactive Mitigation of Cyber Threats: The relentless nature of cyber adversaries, whether they are criminals or nation-states, necessitates proactive measures. It’s crucial to remember that cybersecurity isn’t solely the IT department’s or the CISO’s responsibility – it’s a collective effort involving the entire leadership. Relying solely on governmental action isn’t advised, given the government’s inconsistent approach to helping industries reduce risk. See: Cyber Defenses
The Necessity of Continuous Vigilance in Cybersecurity: The consistent warnings from the FBI and CISA concerning cybersecurity signal potential large-scale threats. Cybersecurity demands 24/7 attention, even on holidays. Ensuring team endurance and preventing burnout by allocating rest periods are imperative. See: Continuous Vigilance
Embracing Corporate Intelligence and Scenario Planning in an Uncertain Age: Apart from traditional competitive challenges, businesses also confront unpredictable external threats. This environment amplifies the significance of Scenario Planning. It enables leaders to envision varied futures, thereby identifying potential risks and opportunities. Regardless of their size, all organizations should allocate time to refine their understanding of the current risk landscape and adapt their strategies. See: Scenario Planning