“Agentic AI systems are being weaponized.” That’s one of the first lines of Anthropic’s new Threat Intelligence report, out today, which details the wide range of cases in which Claude — and likely many other leading AI agents and chatbots — is being abused.

First up: “vibe-hacking.” One sophisticated cybercrime ring that Anthropic says it recently disrupted used Claude Code, Anthropic’s AI coding agent, to extort data from at least 17 organizations around the world within one month. The hacked parties included healthcare organizations, emergency services, religious institutions, and even government entities.

“If you’re a sophisticated actor, what would have otherwise required maybe a team of sophisticated actors, like the vibe-hacking case, to conduct — now, a single individual can conduct, with the assistance of agentic systems,” Jacob Klein, head of Anthropic’s threat intelligence team, told The Verge in an interview. He added that in this case, Claude was “executing the operation end-to-end.”