
Global AI Developers Need to Set Some Standards – Now

Recently, a collaboration between Microsoft and OpenAI found that nation-state actors with ties to China, Iran, North Korea, and Russia have been leveraging artificial intelligence (AI) large language models (LLMs) to enhance ongoing offensive cyber operations. Specifically, OpenAI acknowledged that it had disrupted attempts by state actors to leverage its AI services. Though there is no indication at this point that state threat actors have found ways to harness generative AI to execute novel attacks, the implication is quite clear. With governments like China dedicating state resources to fully understanding how to capitalize on this powerful technology, this is a matter of when, not if. And while China has made it quite clear that it is committed to being the global leader in AI, the fast tracking of this technology by countries like Iran and North Korea should persuade the United States not to be short-sighted about where the threat could come from. Among staunch adversaries, the first one able to apply AI to its advantage in cyber operations will undoubtedly test its capabilities.

When it comes to malicious AI use, many experts have been concerned with adversaries leveraging generative AI to support various aspects of their attacks, including reconnaissance, social engineering, and malware development. All five incidents observed so far appear to be headed down a similar path: trying to understand how best to harness LLMs to the greatest effect. Notably, APT 28 (Russia), Kimsuky (North Korea), Imperial Kitten (Iran), and Aquatic Panda and Maverick Panda (China) were observed benefitting from a variety of OpenAI capabilities, including but not limited to open-source research, identifying potential targets, creating code and resolving coding errors, researching vulnerabilities, and translating foreign technical papers. And while these types of activities are not novel, they do show the diversity of functions AI can support, whether directly aiding genuine cyber research or cyber operations.

The purpose of Microsoft’s and OpenAI’s collaboration is to make sure that AI technologies are used responsibly, and that those using AI do so under the highest ethical standards, in order to reduce the threat of misuse. That may be a challenge given that many of the aforementioned activities are not inherently nefarious in themselves. What makes them potentially dangerous is that they are linked to state actors to which cyber attacks have been attributed, and their use of AI appears to be consistent with pre-attack behavior they would have engaged in regardless. Simply put, the technology has not made these actors engage in nefarious cyber activity so much as it has made that work easier for them. But per Microsoft, AI has done likewise for network defenders, and the company stated in a blog post that it has committed to using generative AI to disrupt these very same threat actors.

The speed with which AI technology could be harnessed by both attackers and defenders prompted the White House to issue an Executive Order to mitigate the risks associated with AI, mandating safety testing and government supervision for AI systems that potentially impact national economic security or public safety. Microsoft has embraced the order, citing how it would implement its own policies to address the risk of state actors and cybercriminal groups exploiting Microsoft AI tools and application programming interfaces, and how it would collaborate with other relevant stakeholders and AI service providers to disrupt hostile activities. This seems a prompt, proactive measure that should help stem misuse of available U.S. AI technologies, at least in the near term.

However, if adversarial nation states are relying on OpenAI or Microsoft for their generative AI needs, they will likely not do so for long as non-U.S. alternatives become available. The rise of ChatGPT has catalyzed a race among big tech companies to develop their own generative AI products. One company tracking AI counted at least 60 countries investing in AI in some capacity, with China, Singapore, Israel, South Korea, and Germany among the top ten alongside the United States. While there is already considerable movement in this area, any success by Microsoft in neutralizing “nefarious” generative AI use will likely only accelerate states’ efforts to develop their own tools.

What is notable is that while there has been mention of misuse and abuse of AI by “adversarial” states, there do not seem to be any guidelines from Microsoft or any other generative AI developer as to how such principles should be applied to all state actors. Private sector stakeholders, especially those developing the technology, are logical leaders to shape what responsible use of their technologies looks like for all states, not just those deemed “bad” by a company that aligns with the U.S. view of which states constitute adversarial cyber actors. What’s more, Microsoft, along with other leading U.S. AI developers and government agencies, recently formed a consortium to focus on the safe development of AI technologies in accordance with the Executive Order, a move that will no doubt further propel other states to come up with their own solutions. This will likely drive an AI arms race, making it even more difficult for states to find common ground on global cybersecurity matters and giving countries another reason to carefully consider which side best represents their interests when it comes to cyber matters.

Technology is agnostic and does not take sides in any conflict. Whether it is judged good or bad is always a matter of perception: how it is used, by whom, and against what. While generative AI technology is still nascent, states recognize the importance of being at the tip of the spear with respect to advancements, as developments will happen quickly. The international community has typically responded to matters like these by coming together in voluntary agreements to demonstrate good faith about state responsibility. However, while these agreements show good faith, they are not legally binding, and states face no punitive repercussions should they be caught violating the principles they pledged to support.

Too many times the world has been caught flat-footed by technological advances. With generative AI, there has been ample warning of its capabilities for both good and bad; there is no excuse not to be prepared for what is already materializing. And that preparation needs to happen on a global scale, with international AI developers helping set the standards, as this technology is for global consumption. This is a real chance for industry to take the lead, and it needs to do so now.


About the Author

Emilio Iasiello

Emilio Iasiello has nearly 20 years’ experience as a strategic cyber intelligence analyst, supporting US government civilian and military intelligence organizations, as well as the private sector. He has delivered cyber threat presentations to domestic and international audiences and has published extensively in such peer-reviewed journals as Parameters, Journal of Strategic Security, the Georgetown Journal of International Affairs, and the Cyber Defense Review, among others. All comments and opinions expressed are solely his own.