

Microsoft and OpenAI Issue a Stark Report and a $10M Bounty from the State Department

Competing cyber capabilities (spanning a spectrum from nation-state to non-state actors) and cyber-based conflict will continue to restructure, reformulate, and transform the very essence of power, prestige, international governance, and geopolitical strategy in the 21st century – and large language models are the new force multiplier. Microsoft and OpenAI have quantified the breadth and scope of this new threat vector, including the major state-sponsored actors. Meanwhile, the State Department goes old school, countering the ransomware threat with a bounty.

State Department puts $10M bounty on AlphV ransomware group

“The prolific threat group and its affiliates are behind some of the most high-profile attacks in the last year.”

  • The State Department offered a reward of up to $10 million for information about the identity or location of leaders affiliated with the AlphV ransomware group. The bounty includes a reward of up to $5 million for information leading to the arrest or conviction of anyone participating in a ransomware attack using the AlphV variant, the agency said Thursday.
  • The FBI and international law enforcement agencies disrupted the prolific ransomware group’s infrastructure in December, but the group regenerated itself mere hours later and continues naming new victims on its data leak site. 
  • The State Department said the reward is complementary to law enforcement’s disruption campaign against AlphV. The ransomware group, also known as BlackCat, has compromised more than 1,000 entities and received nearly $300 million in ransom payments as of September, according to the FBI and Cybersecurity and Infrastructure Security Agency.

OpenAI, Microsoft warn of state-linked actors’ AI use

“Threat groups linked to Russia, China, North Korea and Iran were using AI in preparation for potential early stage hacking campaigns.”

Also from Cybersecurity Dive:  

  • OpenAI said it terminated accounts of five state-affiliated threat groups who were using the company’s large language models to lay the groundwork for malicious hacking campaigns. The disruption was done in collaboration with Microsoft threat researchers.
  • The threat groups — linked to Russia, Iran, North Korea and the People’s Republic of China — were using OpenAI for a variety of precursor tasks, including open source queries, translation, searching for errors in code and running basic coding tasks, according to OpenAI, the company behind ChatGPT.
  • Cybersecurity and AI analysts warn the threat activity uncovered by OpenAI and Microsoft is just a precursor for state-linked and criminal groups to rapidly adopt generative AI to scale their attack capabilities.

Cyber Signals: Navigating cyberthreats and strengthening defenses in the era of AI

The world of cybersecurity is undergoing a massive transformation. AI is at the forefront of this change, and has the potential to empower organizations to defeat cyberattacks at machine speed, address the cyber talent shortage, and drive innovation and efficiency in cybersecurity. However, adversaries can use AI as part of their exploits, and it’s never been more critical for us to both secure our world using AI and secure AI for our world.

Today we released the sixth edition of Cyber Signals, spotlighting how we are protecting AI platforms from emerging threats related to nation-state cyberthreat actors.

In collaboration with OpenAI, we are sharing insights on state-affiliated threat actors tracked by Microsoft, such as Forest Blizzard, Emerald Sleet, Crimson Sandstorm, Charcoal Typhoon, and Salmon Typhoon, who have sought to use large language models (LLMs) to augment their ongoing cyberattack operations. This important research exposes incremental early moves we observe these well-known threat actors taking around AI, and notes how we blocked their activity to protect AI platforms and users.

Microsoft Threat Intelligence: Cyber Signals – February 2024

Navigating cyberthreats and strengthening defenses in the era of AI

From the report: 

“Attackers are exploring AI technologies. The cyberthreat landscape has become increasingly challenging with attackers growing more motivated, more sophisticated, and better resourced. Threat actors and defenders alike are looking to AI, including LLMs, to enhance their productivity and take advantage of accessible platforms that could suit their objectives and attack techniques.

Given the rapidly evolving threat landscape, today we are announcing Microsoft’s principles guiding our actions that mitigate the risk of threat actors, including advanced persistent threats (APTs), advanced persistent manipulators (APMs) and cybercriminal syndicates, using AI platforms and APIs. These principles include identification and action against malicious threat actors’ use of AI, notification to other AI service providers, collaboration with other stakeholders, and transparency.

Although threat actors’ motives and sophistication vary, they share common tasks when deploying attacks. These include reconnaissance, such as researching potential victims’ industries, locations, and relationships; coding, including improving software scripts and malware development; and assistance with learning and using both human and machine languages.”

Threat briefing

Nation-states attempt to leverage AI. In collaboration with OpenAI, we are sharing threat intelligence showing detected state-affiliated adversaries—tracked as Forest Blizzard, Emerald Sleet, Crimson Sandstorm, Charcoal Typhoon, and Salmon Typhoon—using LLMs to augment cyberoperations. The objective of Microsoft’s research partnership with OpenAI is to ensure the safe and responsible use of AI technologies like ChatGPT, upholding the highest standards of ethical application to protect the community from potential misuse.

Forest Blizzard (STRONTIUM), a highly effective Russian military intelligence actor linked to the Main Directorate of the General Staff of the Armed Forces of the Russian Federation (GRU) Unit 26165, has targeted victims of tactical and strategic interest to the Russian government. Its activities span a variety of sectors including defense, transportation/logistics, government, energy, NGOs, and information technology.

Emerald Sleet (Velvet Chollima) is a North Korean threat actor Microsoft has found impersonating reputable academic institutions and NGOs to lure victims into replying with expert insights and commentary about foreign policies related to North Korea. Emerald Sleet’s use of LLMs involved research into think tanks and experts on North Korea, as well as content generation likely to be used in spear phishing campaigns. Emerald Sleet also interacted with LLMs to understand publicly known vulnerabilities, troubleshoot technical issues, and for assistance with using various web technologies.

Crimson Sandstorm (CURIUM) is an Iranian threat actor assessed to be connected to the Islamic Revolutionary Guard Corps. The use of LLMs has involved requests for support around social engineering, assistance in troubleshooting errors, .NET development, and ways in which an attacker might evade detection when on a compromised machine.

Charcoal Typhoon (CHROMIUM) is a China-affiliated threat actor predominantly focused on tracking groups in Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal, as well as individuals globally who oppose China’s policies. In recent operations, Charcoal Typhoon has been observed engaging LLMs to research and understand specific technologies, platforms, and vulnerabilities, indicative of preliminary information-gathering stages.

Another China-backed group, Salmon Typhoon, has been assessing the effectiveness of using LLMs throughout 2023 to source information on potentially sensitive topics, high-profile individuals, regional geopolitics, US influence, and internal affairs. This tentative engagement with LLMs could reflect both a broadening of its intelligence-gathering toolkit and an experimental phase in assessing the capabilities of emerging technologies.

Our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely. We have taken measures to disrupt assets and accounts associated with these threat actors and shape the guardrails and safety mechanisms around our models.

What Next?

Staying ahead of threat actors in the age of AI

Based on their research into these APTs’ use of LLMs, Microsoft and OpenAI “map and classify these [LLM-enabled] TTPs using the following descriptions”:

  • LLM-informed reconnaissance
    • Interacting with LLMs to understand satellite communication protocols, radar imaging technologies, and specific technical parameters. These queries suggest an attempt to acquire in-depth knowledge of satellite capabilities.
    • Interacting with LLMs to identify think tanks, government organizations, or experts on North Korea that have a focus on defense issues or North Korea’s nuclear weapons program.
    • Engaging LLMs to research and understand specific technologies, platforms, and vulnerabilities, indicative of preliminary information-gathering stages.
    • Engaging LLMs for queries on a diverse array of subjects, such as global intelligence agencies, domestic concerns, notable individuals, cybersecurity matters, topics of strategic interest, and various threat actors. These interactions mirror the use of a search engine for public domain research.
  • LLM-assisted vulnerability research: Interacting with LLMs to better understand publicly reported vulnerabilities, such as the CVE-2022-30190 Microsoft Support Diagnostic Tool (MSDT) vulnerability (known as “Follina”).
  • LLM-supported social engineering
    • Using LLMs for assistance with the drafting and generation of content likely for use in spear-phishing campaigns against individuals with regional expertise.
    • Interacting with LLMs to generate various phishing emails, including one pretending to come from an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism. 
    • Leveraging LLMs for assistance with translations and communication, likely to establish connections or manipulate targets.
  • LLM-enhanced scripting techniques
    • Using LLMs for basic scripting tasks such as programmatically identifying certain user events on a system and seeking assistance with troubleshooting and understanding various web technologies.
    • Seeking assistance in basic scripting tasks, including file manipulation, data selection, regular expressions, and multiprocessing, to potentially automate or optimize technical operations.
    • Using LLMs to generate code snippets that appear intended to support app and web development, interactions with remote servers, web scraping, executing tasks when users sign in, and sending information from a system via email.
    • Utilizing LLMs to generate and refine scripts, potentially to streamline and automate complex cyber tasks and operations.
    • Using LLMs to identify and resolve coding errors. Microsoft observed requests for support in developing code with potentially malicious intent and noted that the model adhered to established ethical guidelines, declining to provide such assistance.
  • LLM-enhanced anomaly detection evasion: Attempting to use LLMs for assistance in developing code to evade detection, to learn how to disable antivirus via registry or Windows policies, and to delete files in a directory after an application has been closed.
  • LLM-refined operational command techniques
    • Utilizing LLMs for advanced commands, deeper system access, and control representative of post-compromise behavior.
    • Demonstrating an interest in specific file types and concealment tactics within operating systems, indicative of an effort to refine operational command execution.
  • LLM-aided technical translation and explanation: Leveraging LLMs for the translation of computing terms and technical papers.

The Microsoft and OpenAI researchers also provide the following Appendix: LLM-themed TTPs: 

Using insights from our analysis…as well as other potential misuse of AI, we’re sharing the below list of LLM-themed TTPs that we map and classify to the MITRE ATT&CK® framework or MITRE ATLAS™ knowledge base to equip the community with a common taxonomy to collectively track malicious use of LLMs and create countermeasures against:

  • LLM-informed reconnaissance: Employing LLMs to gather actionable intelligence on technologies and potential vulnerabilities.
  • LLM-enhanced scripting techniques: Utilizing LLMs to generate or refine scripts that could be used in cyberattacks, or for basic scripting tasks such as programmatically identifying certain user events on a system and assistance with troubleshooting and understanding various web technologies.
  • LLM-aided development: Utilizing LLMs in the development lifecycle of tools and programs, including those with malicious intent, such as malware.
  • LLM-supported social engineering: Leveraging LLMs for assistance with translations and communication, likely to establish connections or manipulate targets.
  • LLM-assisted vulnerability research: Using LLMs to understand and identify potential vulnerabilities in software and systems, which could be targeted for exploitation.
  • LLM-optimized payload crafting: Using LLMs to assist in creating and refining payloads for deployment in cyberattacks.
  • LLM-enhanced anomaly detection evasion: Leveraging LLMs to develop methods that help malicious activities blend in with normal behavior or traffic to evade detection systems.
  • LLM-directed security feature bypass: Using LLMs to find ways to circumvent security features, such as two-factor authentication, CAPTCHA, or other access controls.
  • LLM-advised resource development: Using LLMs in tool development, tool modifications, and strategic operational planning.
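To illustrate how defenders might operationalize such a shared taxonomy, the minimal Python sketch below indexes a few of the LLM-themed TTPs from the appendix above so that observed activity can be tagged consistently across teams. The TTP names and descriptions come from the report; the `framework_ref` strings are hypothetical placeholders, not actual MITRE ATT&CK® or MITRE ATLAS™ technique IDs, and only three of the nine TTPs are shown.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LlmTtp:
    """One LLM-themed TTP from the Microsoft/OpenAI taxonomy."""
    name: str
    description: str
    framework_ref: str  # hypothetical placeholder, not a real MITRE technique ID

# Abridged taxonomy keyed by TTP name for consistent tagging of observations.
TAXONOMY = {
    t.name: t
    for t in [
        LlmTtp("LLM-informed reconnaissance",
               "Gathering actionable intelligence on technologies and vulnerabilities",
               "LLM-001"),
        LlmTtp("LLM-enhanced scripting techniques",
               "Generating or refining scripts that could be used in cyberattacks",
               "LLM-002"),
        LlmTtp("LLM-supported social engineering",
               "Translations and content generation to manipulate targets",
               "LLM-003"),
    ]
}

def tag_observation(observed_behavior: str, ttp_name: str) -> dict:
    """Attach the shared taxonomy entry to a raw analyst observation."""
    ttp = TAXONOMY[ttp_name]
    return {
        "observation": observed_behavior,
        "ttp": ttp.name,
        "ref": ttp.framework_ref,
    }
```

A common keyed structure like this is what lets multiple AI service providers and threat-intelligence teams describe the same behavior the same way, which is the stated goal of mapping these TTPs into MITRE’s frameworks.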

Additional OODA Loop Resources 

Cyber War: Power, Prestige, International Governance, and Strategy in the Age of Global Polycrisis:  Competing cyber capabilities (on a spectrum from nation-state to non-state actors alike) and cyber-based conflict will continue to restructure, reformulate, discombobulate, and transform the very essence of what power, prestige, international governance, and geopolitical strategy are in the 21st century. Fueled by the Global Polycrisis, cyberwars will continue to take center stage.

Cyber Risks

Corporate Board Accountability for Cyber Risks: With a combination of market forces, regulatory changes, and strategic shifts, corporate boards and their directors are now accountable for cyber risks in their firms. See: Corporate Directors and Risk

Geopolitical-Cyber Risk Nexus: The interconnectivity brought by the Internet has made regional issues affect global cyberspace. Now, every significant event has cyber implications, making it imperative for leaders to recognize and act upon the symbiosis between geopolitical and cyber risks. See The Cyber Threat

Ransomware’s Rapid Evolution: Ransomware technology and its associated criminal business models have seen significant advancements. This has culminated in a heightened threat level, resembling a pandemic in its reach and impact. Yet, there are strategies available for threat mitigation. See: Ransomware, and update.

Challenges in Cyber “Net Assessment”: While leaders have long tried to gauge both cyber risk and security, actionable metrics remain elusive. Current metrics mainly determine if a system can be compromised, without guaranteeing its invulnerability. It’s imperative not just to develop action plans against risks but to contextualize the state of cybersecurity concerning cyber threats. Despite its importance, achieving a reliable net assessment is increasingly challenging due to the pervasive nature of modern technology. See: Cyber Threat

Recommendations for Action

Decision Intelligence for Optimal Choices: The simultaneous occurrence of numerous disruptions complicates situational awareness and can inhibit effective decision-making. Every enterprise should evaluate their methods of data collection, assessment, and decision-making processes. For more insights: Decision Intelligence.

Proactive Mitigation of Cyber Threats: The relentless nature of cyber adversaries, whether they are criminals or nation-states, necessitates proactive measures. It’s crucial to remember that cybersecurity isn’t solely the responsibility of the IT department or the CISO – it’s a collective effort that involves the entire leadership. Relying solely on governmental actions isn’t advised given its inconsistent approach towards aiding industries in risk reduction. See: Cyber Defenses

The Necessity of Continuous Vigilance in Cybersecurity: The consistent warnings from the FBI and CISA concerning cybersecurity signal potential large-scale threats. Cybersecurity demands 24/7 attention, even on holidays. Ensuring team endurance and preventing burnout by allocating rest periods are imperative. See: Continuous Vigilance

Embracing Corporate Intelligence and Scenario Planning in an Uncertain Age: Apart from traditional competitive challenges, businesses also confront external threats, many of which are unpredictable. This environment amplifies the significance of Scenario Planning. It enables leaders to envision varied futures, thereby identifying potential risks and opportunities. All organizations, regardless of their size, should allocate time to refine their understanding of the current risk landscape and adapt their strategies. See: Scenario Planning


About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.