Start your day with intelligence. Get The OODA Daily Pulse.

Home > Briefs > Technology > Malicious AI Prompt Injection Attacks Increasing, but Sophistication Still Low: Google

Malicious AI Prompt Injection Attacks Increasing, but Sophistication Still Low: Google

Google has analyzed AI indirect prompt injection attempts involving sites on the public web and noticed an increase in malicious attacks over the past months, but the tech giant’s researchers say their sophistication is relatively low. Direct prompt injection is a ‘jailbreak’ where a user interacts with the AI to bypass its rules, whereas indirect prompt injection is a ‘hidden trap’ where the AI is tricked by malicious instructions found in external data. Cybersecurity researchers have discovered many indirect prompt injection methods in recent years, using specially crafted prompts planted on websites, in emails, and developer resources to trick Gemini, Copilot, ChatGPT, and other gen-AI tools into bypassing security and facilitating data theft.

Full report : Google has found that many indirect prompt injection attempts are harmless, but some malicious exploits have also been identified.