Bringing artificial intelligence into the cybersecurity field has created a vicious cycle. Cyber professionals now employ AI to enhance their tools and boost their detection and protection capabilities, but cybercriminals are also harnessing AI for their attacks. Security teams then use more AI in response to the AI-driven threats, threat actors augment their AI to keep up, and the cycle continues.

Despite its great potential, AI is significantly limited when employed in cybersecurity. There are trust issues with AI security solutions, and the data models used to develop AI-powered security products appear to be perennially at risk. In addition, at implementation, AI often clashes with human intelligence. AI's double-edged nature makes it a complex tool to handle, one that organizations need to understand more deeply and use more carefully. In contrast, threat actors are taking advantage of AI with almost zero limitations.

One of the biggest issues in adopting AI-driven solutions in cybersecurity is trust-building. Many organizations are skeptical of security firms' AI-powered products. This is understandable, because several of these AI security solutions are overhyped and fail to deliver. One of the most advertised benefits of such products is that they simplify security tasks so significantly that even non-security personnel can complete them. This claim is often a letdown, especially for organizations struggling with a scarcity of cybersecurity talent. AI is supposed to be one of the solutions to the cybersecurity talent shortage, but companies that overpromise and underdeliver are not helping to resolve the problem – in fact, they are undermining the credibility of AI-related claims.
Full opinion: AI's efficacy is constrained in cybersecurity, but limitless in cybercrime.