

Details of a recent report from the Treasury Department on AI-Specific Cybersecurity Risks in the Financial Sector.  

U.S. Department of the Treasury Releases Report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector 

About the Report

The U.S. Department of the Treasury released a report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector. The report was produced at the direction of Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Treasury’s Office of Cybersecurity and Critical Infrastructure Protection (OCCIP) led the development of the report; OCCIP executes the Treasury Department’s Sector Risk Management Agency responsibilities for the financial services sector.
 
As part of its research for the report, Treasury conducted in-depth interviews with 42 companies across the financial services and technology sectors. Financial firms of all sizes, from global systemically important financial institutions to local banks and credit unions, provided input on how AI is used within their organizations. Additional stakeholders included major technology companies and data providers, financial sector trade associations, cybersecurity and anti-fraud service providers, and regulatory agencies. The report provides an extensive overview of current AI use cases for cybersecurity and fraud prevention, along with best practices and recommendations for AI use and adoption. It does not impose any requirements and neither endorses nor discourages the use of AI within the financial sector.
 

Recommendations from the Report

In the report, Treasury identifies significant opportunities and challenges that AI presents to the security and resiliency of the financial services sector. The report outlines a series of next steps to address immediate AI-related operational risk, cybersecurity, and fraud challenges: 
  1. Addressing the growing capability gap. There is a widening gap between large and small financial institutions when it comes to in-house AI systems. Large institutions are developing their own AI systems, while smaller institutions may be unable to do so because they lack the internal data resources required to train large models. Additionally, financial institutions that have already migrated to the cloud may have an advantage when it comes to leveraging AI systems in a safe and secure manner.
  2. Narrowing the fraud data divide. As more firms deploy AI, a gap exists in the data available to financial institutions for training models. This gap is significant in the area of fraud prevention, where there is insufficient data sharing among firms. As financial institutions work with their internal data to develop these models, large institutions hold a significant advantage because they have far more historical data. Smaller institutions generally lack sufficient internal data and expertise to build their own anti-fraud AI models.
  3. Regulatory coordination. Financial institutions and regulators are collaborating on how best to resolve oversight concerns together in a rapidly changing AI environment. However, there are concerns about regulatory fragmentation, as different financial-sector regulators at the state and federal levels, and internationally, consider regulations for AI.
  4. Expanding the NIST AI Risk Management Framework. The National Institute of Standards and Technology (NIST) AI Risk Management Framework could be expanded and tailored to include more applicable content on AI governance and risk management related to the financial services sector.
  5. Best practices for data supply chain mapping and “nutrition labels.” Rapid advancements in generative AI have underscored the importance of carefully monitoring data supply chains to ensure that models are using accurate and reliable data and that privacy and safety are considered. In addition, financial institutions should know where their data is and how it is being used. The financial sector would benefit from the development of best practices for data supply chain mapping, as well as from a standardized description, similar to the food “nutrition label,” for vendor-provided AI systems and data providers. These “nutrition labels” would clearly identify what data was used to train the model, where the data originated, and how any data submitted to the model is being used (a hypothetical sketch of such a label appears after this list).
  6. Explainability for black box AI solutions. Explainability of advanced machine learning models, particularly generative AI, continues to be a challenge for many financial institutions. The sector would benefit from additional research and development on explainability solutions for black-box systems like generative AI, accounting for the data used to train the models, the outputs they produce, and robust testing and auditing of these models. In the absence of such solutions, the financial sector should adopt best practices for using generative AI systems that lack explainability (a minimal sketch of one model-agnostic approach also follows this list).
  7. Gaps in human capital. The rapid pace of AI development has exposed a substantial workforce talent gap, spanning both those who build and maintain AI models and those who use them. A set of best practices for less-skilled practitioners on how to use AI systems safely would help manage this talent gap. In addition, a technical competency gap exists in teams managing AI risks, such as in the legal and compliance fields. Role-specific AI training for employees outside of information technology can help educate these critical teams.
  8. A need for a common AI lexicon. There is a lack of consistency across the sector in defining what “artificial intelligence” is. Financial institutions, regulators, and consumers would all benefit greatly from a common AI-specific lexicon.
  9. Untangling digital identity solutions. Robust digital identity solutions can help financial institutions combat fraud and strengthen cybersecurity. However, these solutions differ in their technology, governance, and security, and offer varying levels of assurance. International, industry, and national technical standards for digital identity are now emerging.
  10. International coordination. The path forward for regulation of AI in financial services remains an open question internationally. Treasury will continue to engage with foreign counterparts on the risks and benefits of AI in financial services.
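
To make the “nutrition label” idea in item 5 concrete, the sketch below shows what a machine-readable disclosure record for a vendor-provided AI system might look like. This is a hypothetical illustration only: the report calls for a standardized label but does not prescribe a schema, and every field name here is invented.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ModelNutritionLabel:
    """Hypothetical disclosure record for a vendor-provided AI system.

    All fields are illustrative; Treasury's report calls for a standardized
    label but does not prescribe any particular schema.
    """
    model_name: str
    vendor: str
    training_data_sources: List[str]       # where the training data originated
    training_data_cutoff: str              # most recent data in the training set
    customer_data_retained: bool           # is data submitted to the model stored?
    customer_data_used_for_training: bool  # is submitted data fed back into training?
    intended_use: str

# Example label a vendor might publish alongside a fraud-scoring model.
label = ModelNutritionLabel(
    model_name="fraud-screen-v2",
    vendor="ExampleVendor Inc.",
    training_data_sources=["licensed transaction records", "public sanctions lists"],
    training_data_cutoff="2023-06",
    customer_data_retained=False,
    customer_data_used_for_training=False,
    intended_use="Transaction fraud scoring for retail banking",
)
```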
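
On item 6, one family of techniques for peering into black-box models is model-agnostic, post-hoc attribution: perturb each input and observe how the opaque score moves. The sketch below is a minimal illustration of that idea, using an invented stand-in model and invented feature names; it is not a method endorsed by the report, and production work would typically rely on established tooling such as SHAP or LIME.

```python
import random

def black_box_score(features):
    """Stand-in for an opaque vendor model returning a fraud score in [0, 1].
    The weights are arbitrary and exist only so the example runs."""
    amount, velocity, geo_mismatch = features
    return min(1.0, 0.4 * amount + 0.3 * velocity + 0.3 * geo_mismatch)

def perturbation_attribution(score_fn, features, trials=200, noise=0.1):
    """Estimate each feature's influence on a black-box score by jittering
    it and averaging the resulting score change (a crude cousin of LIME)."""
    base = score_fn(features)
    influence = []
    for i in range(len(features)):
        total = 0.0
        for _ in range(trials):
            perturbed = list(features)
            perturbed[i] += random.uniform(-noise, noise)
            total += abs(score_fn(perturbed) - base)
        influence.append(total / trials)
    return influence

# Normalized transaction amount, transaction velocity, geolocation-mismatch flag.
txn = [0.8, 0.2, 1.0]
print(perturbation_attribution(black_box_score, txn))  # per-feature influence estimates
```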

Read Treasury’s AI Report here

What Next? 

Per a press release from Treasury, “In the coming months, Treasury will work with the private sector, other federal agencies, federal and state financial sector regulators, and international partners on key initiatives to address the challenges surrounding AI in the financial sector. While this report focuses on operational risk, cybersecurity, and fraud issues, Treasury will continue to examine a range of AI-related matters, including the impact of AI on consumers and marginalized communities.” 
 

A “Common Lexicon” 

As reported by CYBERSCOOP: “Going forward, the financial services sector relayed that it would be helpful to have ‘a common lexicon’ on AI tools to aid in more productive discussions with third parties and regulators, ensuring that all stakeholders are speaking the same language. Report participants also said their firms would ‘benefit from the development of best practices concerning the mapping of data supply chains and data standards.’

“The Treasury Department said it would work with the financial sector, as well as NIST, the Cybersecurity and Infrastructure Security Agency, and the National Telecommunications and Information Administration to further discuss potential recommendations tied to those asks. In the coming months, Treasury officials will collaborate with industry, other agencies, international partners, and federal and state financial sector regulators on critical initiatives tied to AI-related challenges in the sector.”

Treasury: AI-fueled cyber threats bring new challenges

The ABA Banking Journal itemized “several next steps and opportunities,” including:

  • Develop a common AI lexicon.
  • Address the growing capability gap between the largest and smallest financial institutions.
  • Narrow the fraud data divide.
  • Clarify how AI will be regulated in the future.
  • Expand the NIST AI Risk Management Framework.
  • Develop best practices for data supply chain mapping disclosures (aka “nutrition labels”).
  • Decipher explainability for black box AI solutions.
  • Address gaps in human capital.
  • Untangle digital identity solutions.
  • Coordinate with international authorities.

Additional OODA Loop Resources


Related References:

Is the US Government Over-Regulating Artificial Intelligence?  Maybe it is time to start thinking of ways to decontrol our AI.

Decontrol AI to Accelerate Solutions:  In a previous post we asked “Is the US Government Over-Regulating Artificial Intelligence?” We followed it with some historical context and a summary of today’s government controls in “Regulations on Government Use of AI.” This post builds on those two to examine an example on how a decontrol mindset can help speed AI solutions into use.

Regulations on Government Use of AI: In a previous post in this series we raised a question: Is the US Government Over-Regulating AI? That post discussed “decontrol” as a concept whose time has come in discussions of AI use in government. But how can we know if we are over-regulated without an overview of the regulatory environment? Here we provide a short overview of the regulation of government IT with an eye toward shedding light on ways to accelerate AI through some decontrol.

Perspectives on AI Hallucinations: Code Libraries and Developer Ecosystems: Our hypothesis on AI hallucinations is based on a quote from the OODAcon 2024 panel “The Next Generative AI Surprise”: “Artificial Intelligence hallucinations may sometimes provide an output that is a very creative interpretation of something or an edge case that proves useful.” With that framing in mind, the following is the first installment in our survey of differing perspectives on the threats and opportunities created by AI hallucinations.

The Next Generative AI Surprise:  At the OODAcon 2022 conference, we predicted that ChatGPT would take the business world by storm and included an interview with OpenAI Board Member and former Congressman Will Hurd.  Today, thousands of businesses are being disrupted or displaced by generative AI. This topic was further examined at length at OODAcon 2023, taking a closer look at this innovation and its impact on business, society, and international politics.  The following are insights from an OODAcon 2023 discussion between Pulkit Jaiswal, Co-Founder of NWO.ai, and Bob Flores, former CTO of the CIA.

What Can Your Organization Learn from the Use Cases of Large Language Models in Medicine and Healthcare?: It has become conventional wisdom that biotech and healthcare are the pace cars in implementing AI use cases with innovative business models and value-creation mechanisms. Other industry sectors should keep a close eye on the critical milestones and pitfalls of the biotech/healthcare space, noting which platform, product, and service innovations and architectures may have a portable value proposition within their own industries. The Stanford Institute for Human-Centered AI (HAI) is doing great work fielding research in medicine and healthcare environments with quantifiable results that offer a window into AI as a general applied technology during this vast but shallow early implementation phase of “AI for the enterprise” across all industry sectors. Details here.

Two Emergent and Sophisticated Approaches to LLM Implementation in Cybersecurity: Google Security Engineering and the Carnegie Mellon University Software Engineering Institute (in collaboration with OpenAI) have sorted through the hype and done some serious thinking and formal research on developing “better approaches for evaluating LLM cybersecurity” and AI-powered patching: the future of automated vulnerability fixes. This is some great formative framing of the challenges ahead as we collectively sort out the implications of the convergence of generative AI and future cyber capabilities (offensive and defensive).

The Origins Story and the Future Now of Generative AI: This book explores generative artificial intelligence’s fast-moving impacts and exponential capabilities over just one year.

Generative AI – Socio-Technological Risks, Potential Impacts, Market Dynamics, and Cybersecurity Implications: The risks, potential positive and negative impacts, market dynamics, and security implications of generative AI emerged slowly, then rapidly, throughout 2023, as the unprecedented hype cycle around artificial intelligence settled into a more pragmatic stoicism marked by project deployments.

In the Era of Code, Generative AI Represents National Security Risks and Opportunities for “Innovation Power”: We are entering the Era of Code. Code that writes code and code that breaks code. Code that talks to us and code that talks for us. Code that predicts and code that decides. Code that rewrites us. Organizations and individuals that prioritize understanding how the Code Era impacts them will develop increasing advantages in the future. At OODAcon 2023, we took a closer look at generative AI innovation and its impact on business, society, and international politics. IQT and the Special Competitive Studies Project (SCSP) recently weighed in on this generative AI “spark” of innovation that will “enhance all elements of our innovation power” and the potential cybersecurity conflagrations that the same spark may also ignite. Details here.

Cyber Risks

Corporate Board Accountability for Cyber Risks: With a combination of market forces, regulatory changes, and strategic shifts, corporate boards and directors are now accountable for cyber risks in their firms. See: Corporate Directors and Risk

Geopolitical-Cyber Risk Nexus: The interconnectivity brought by the Internet has caused regional issues that affect global cyberspace. Now, every significant event has cyber implications, making it imperative for leaders to recognize and act upon the symbiosis between geopolitical and cyber risks. See The Cyber Threat

Ransomware’s Rapid Evolution: Ransomware technology and its associated criminal business models have seen significant advancements. This has culminated in a heightened threat level, resembling a pandemic’s reach and impact. Yet, there are strategies available for threat mitigation. See: Ransomware, and this update.

Challenges in Cyber “Net Assessment”: While leaders have long tried to gauge both cyber risk and security, actionable metrics remain elusive. Current metrics mainly determine if a system can be compromised without guaranteeing its invulnerability. It’s imperative not just to develop action plans against risks but to contextualize the state of cybersecurity concerning cyber threats. Despite its importance, achieving a reliable net assessment is increasingly challenging due to the pervasive nature of modern technology. See: Cyber Threat

Recommendations for Action

Decision Intelligence for Optimal Choices: Numerous disruptions complicate situational awareness and can inhibit effective decision-making. Every enterprise should evaluate its data collection methods, assessments, and decision-making processes. For more insights, see: Decision Intelligence.

Proactive Mitigation of Cyber Threats: The relentless nature of cyber adversaries, whether criminals or nation-states, necessitates proactive measures. It’s crucial to remember that cybersecurity isn’t solely the IT department’s or the CISO’s responsibility; it’s a collective effort involving the entire leadership. Relying solely on government action isn’t advised, given the government’s inconsistent approach to aiding industries in risk reduction. See: Cyber Defenses

The Necessity of Continuous Vigilance in Cybersecurity: The consistent warnings from the FBI and CISA concerning cybersecurity signal potential large-scale threats. Cybersecurity demands 24/7 attention, even on holidays. Ensuring team endurance and preventing burnout by allocating rest periods are imperative. See: Continuous Vigilance

Embracing Corporate Intelligence and Scenario Planning in an Uncertain Age: Apart from traditional competitive challenges, businesses also confront unpredictable external threats. This environment amplifies the significance of Scenario Planning. It enables leaders to envision varied futures, thereby identifying potential risks and opportunities. Regardless of their size, all organizations should allocate time to refine their understanding of the current risk landscape and adapt their strategies. See: Scenario Planning


About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.