The arrival and continued evolution of artificial intelligence (AI) has made a significant impact on the cyber ecosystem. As with any new technology, the possibilities for its capabilities and uses seem endless. Generative AI has already been welcomed globally, with estimates that 40% of companies worldwide are using it in some capacity and that 82% are either using it or exploring its use. These statistics are consistent with findings that 49% of companies are already using ChatGPT, with 30% more planning to do so in the future. There is little doubt that AI is a mainstay technology with applicability across all industries and sectors, offering enormous opportunity. Its ability to manage incredible volumes of data and assist human-level decision-making is not lost on militaries as they compete to become the dominant consumers of AI, making the most exciting new technology of the 21st century the most disruptive one as well.
Therefore, it comes as little surprise that AI’s rapid evolution has been embraced not only by states seeking strategic advantage, but also by hostile nonstate actors who see quick implementation as providing a leg up against their adversaries: the network defenders. Indeed, the race is already on as attackers and defenders seek to operationalize AI most effectively. And while the jury is still out as to which side will prevail, early indications are that hostile actors have so far been best able to use it to support their attacks, leveraging AI to automate operations, increase speed and scalability, and enhance social engineering with more polished, seemingly legitimate content. This is not to say defenders haven’t been using AI for their own purposes; it has already demonstrated the capacity to proactively scour enterprises and improve an organization’s cybersecurity posture.
This brings us to the crux of the question: who will ultimately gain the advantage from AI technology, attackers or defenders?
There is an adage long associated with the cybersecurity dilemma: attackers have the luxury of time on their side and can fail a thousand times yet need to succeed only once to achieve their objective, while defenders must be on point all the time to prevent a potentially catastrophic incident. At first blush, the conclusion seems obvious: the attacker will always have the advantage. It is no secret that many cybersecurity teams lack the budget, resources, or manpower to address the volume of alerts and threats they face daily. However, AI closes this gap significantly. While machine learning can certainly expedite the attack cycle, it can also improve the speed of detection and response, thereby shortening the window attackers have to execute their attack plans. There is also the added benefit that defenders can disrupt their counterparts at any point in the attack chain once it is detected.
So, while much of the current literature on AI suggests that for the immediate future (approximately 3-5 years) AI will benefit the attacker more than the defender, there is optimism that this will quickly change. For example, in January 2024, an international conference on cybersecurity revealed that chief information security officers (CISOs) see the scales tipping in favor of defenders at the enterprise level. CISOs in particular stand to benefit in some of the biggest challenges they face: vulnerability management, prompt detection, and threat mitigation. This would reduce the volume of threats faced and better position defenders to act quickly on those deemed most dangerous to their environments.
Indeed, using AI to proactively discover the very anomalies attackers are looking to exploit would certainly position defenders better, as it would enhance security operations and allow teams to focus on the main threats facing their organizations. The fact that IT security teams are actively trying to incorporate AI into their defense practices is also promising. One survey on the state of cybersecurity found that of 950 respondents, approximately 75% said their organizations were spending more on AI than in previous months, and 85% said they were looking to apply AI in their security operations within the next year. What is encouraging is the inherent understanding that AI can be a much-needed cyber defense game changer, even if some security teams remain skeptical about how generative AI could adversely affect their organizations’ cybersecurity.
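To make the defensive use case concrete, here is a minimal sketch of AI-assisted alert triage: an unsupervised anomaly detector is trained on baseline telemetry and then used to rank new events so analysts see the most suspicious ones first. Everything in it is an illustrative assumption rather than anything drawn from the surveys above: the feature set (bytes sent, login hour, failed-authentication count), the synthetic data, and the choice of scikit-learn’s IsolationForest.

```python
# A minimal sketch of AI-assisted alert triage: train an unsupervised
# anomaly detector on baseline telemetry, then rank incoming events so
# analysts review the most anomalous ones first instead of every alert.
# Feature layout is a hypothetical example: (bytes sent, login hour,
# failed-auth count).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic stand-in for "normal" telemetry; a real pipeline would
# extract these features from log, EDR, or network data.
baseline = rng.normal(loc=[500, 13, 1], scale=[100, 3, 1], size=(2000, 3))

# Fit on baseline activity assumed to be mostly benign.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)

# New events: mostly routine, plus two injected events that resemble
# exfiltration (large transfers, odd hours, repeated auth failures).
new_events = np.vstack([
    rng.normal(loc=[500, 13, 1], scale=[100, 3, 1], size=(50, 3)),
    [[9000, 3, 12], [7500, 2, 9]],
])

# Lower decision scores mean more anomalous; surface the worst first.
scores = detector.decision_function(new_events)
for i in np.argsort(scores)[:5]:
    print(f"event {i}: features={np.round(new_events[i], 1)}, score={scores[i]:.3f}")
```

In practice, the ranked output would feed an analyst queue rather than a print statement; the point is simply that scoring events by anomaly lets a small team spend its limited time on the alerts most likely to matter.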
Still, if one of AI’s advantages is speed and automation, it raises the question of when security teams will be fully capable of using AI as part of their cybersecurity practices. In addition to fully understanding how to use the technology, security teams face constraints imposed by their organizations that may slow its acquisition, training, and rollout before they can truly benefit from it. For instance, approvals must be obtained, financial investments made, the technology incorporated into the enterprise, and the capability scaled across environments that may be extensive and not necessarily uniform. Conversely, attackers face none of these constraints in adopting and using the technology, giving them a timing advantage when it comes to AI implementation. This will likely change, but chances are it will take longer than the 3-5 years some have estimated, leaving attackers in the proverbial catbird seat for the foreseeable future. Ultimately, defenders’ biggest obstacle may be the pace of their own technological adoption rather than that of their adversaries.
There is every likelihood that, against the ordinary nonstate hacker or cybercriminal, defenders will ultimately gain the advantage the longer AI is used and incorporated. But this may not be the case with more sophisticated cybercrime groups, and certainly not with state actors that may be constantly investing in their own AI development. The key will be how these more organized nonstate actors and cybercrime groups use AI while monitoring how companies incorporate the technology into their own systems and tools to counter them. Moreover, the soft power of AI, content creation, is a concern that cannot be overlooked. China is currently drafting a plan that would require platforms and online service providers to label all AI-generated material with a visible logo to alert consumers. Given the global concern over disinformation and misinformation, this seems like the kind of initiative the world could get behind.
Many believe that the longer AI’s machine learning is used to detect threats, the better it will become at recognizing and mitigating them, perhaps making it more beneficial to defenders over time. This is true: any botched attack benefits the defender’s AI system. But not every attack will be a sophisticated one, and the more advanced actors will invariably look to circumvent this strength, whether by burying the true attack in a batch of noise (a toy illustration follows below), flooding AI’s resources, or conducting multiple attacks that incorporate different tactics, techniques, and procedures. None of these may work, but this is not to say that AI will crown defenders as the ultimate victors. The most persistent attackers in cyberspace are nothing if not proven innovators, known to think outside the box to achieve their objectives, and complacency about the strength of one’s security has never been a winning strategy.
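To illustrate the first of those evasion tactics, the sketch below reuses the same illustrative assumptions as the earlier example (synthetic features, scikit-learn’s IsolationForest) to show how an attacker who can seed the baseline a detector learns from with traffic resembling the eventual attack may cause that attack to score as normal. It is a simplified caricature of the idea, not a depiction of any real tool or incident.

```python
# Toy illustration of noise-flooding evasion: a detector trained on a
# clean baseline flags the attack, but one trained on a baseline salted
# with attacker-generated noise near the attack's profile may not.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Hypothetical exfiltration event: (bytes sent, login hour, failed auths).
attack = np.array([[9000.0, 3.0, 12.0]])

clean_baseline = rng.normal(loc=[500, 13, 1], scale=[100, 3, 1], size=(2000, 3))
clean_model = IsolationForest(contamination=0.01, random_state=0).fit(clean_baseline)
print("clean model flags attack:", clean_model.predict(attack)[0] == -1)

# The attacker floods the environment with benign-looking noise clustered
# around the attack's own feature profile before striking.
noise = rng.normal(loc=[8800, 3, 11], scale=[300, 1, 2], size=(400, 3))
poisoned_model = IsolationForest(contamination=0.01, random_state=0).fit(
    np.vstack([clean_baseline, noise])
)
print("poisoned model flags attack:", poisoned_model.predict(attack)[0] == -1)
```

The design lesson is the one the paragraph above draws: a learning-based defense is only as trustworthy as the data it learns from, which is one reason complacency about any single control remains a losing strategy.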