Start your day with intelligence. Get The OODA Daily Pulse.

How Hackers Manipulate Agentic AI with Prompt Engineering

The era of “agentic” artificial intelligence has arrived, and businesses can no longer afford to overlook its transformative potential. AI agents operate independently, making decisions and taking actions based on their programming. Gartner predicts that by 2028, 15% of day-to-day business decisions will be made completely autonomously by AI agents. However, as these systems become more widely accepted, their integration into critical operations as well as excessive agency—deep access to systems, data, functionalities, and permissions—make them appealing targets for cybercrime. One of the most subtle but powerful attack techniques that threat actors use to manipulate, deceive, or compromise AI agents involves prompt engineering. Prompt engineering is the practice of crafting inputs (a.k.a. prompts) to AI systems, particularly those based on large language models (LLMs), to elicit specific responses or behaviors. While prompt engineering is typically used for legitimate purposes, such as guiding the AI’s decision-making process, it can also be exploited by threat actors to influence its outputs or even manipulate its underlying data or logic (i.e., prompt injection).

Full report : Organizations adopting the transformative nature of agentic AI are urged to take heed of prompt engineering tactics being practiced by threat actors.