
Anthropic’s new model is a pro at finding security flaws

Anthropic’s latest AI model found more than 500 previously unknown high-severity security flaws in open-source libraries with little to no prompting, the company shared first with Axios. The result signals an inflection point in how AI tools can help cyber defenders, even as AI is also making attacks more dangerous.

Anthropic debuted Claude Opus 4.6, the latest version of its largest AI model, on Thursday. Before the release, Anthropic’s frontier red team tested Opus 4.6 in a sandboxed environment to see how well it could find bugs in open-source code. The team gave the model everything it needed to do the job — access to Python and vulnerability analysis tools, including classic debuggers and fuzzers — but no specific instructions or specialized knowledge. Using just these “out-of-the-box” capabilities, Claude found more than 500 previously unknown zero-day vulnerabilities, each validated by either a member of Anthropic’s team or an outside security researcher.
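For readers unfamiliar with the fuzzers mentioned above: a fuzzer bombards a target function with large volumes of random or mutated input and records any input that crashes it. The sketch below is a minimal, self-contained illustration of that idea in Python; the `parse_header` target and its deliberate bug are hypothetical examples, not Anthropic’s tooling or any real library code.

```python
import random
import string

def parse_header(data: str) -> str:
    # Hypothetical target with a deliberate bug: it assumes every
    # input contains a ":" separator, so malformed input crashes it.
    key, value = data.split(":", 1)  # raises ValueError if ":" is absent
    return key.strip()

def fuzz(target, trials=1000, seed=0):
    """Feed random printable strings to `target`; collect inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        s = "".join(rng.choice(string.printable)
                    for _ in range(rng.randint(0, 12)))
        try:
            target(s)
        except Exception as exc:
            # A real fuzzer would save the crashing input as a test case
            # and try to minimize it; here we just record it.
            crashes.append((s, type(exc).__name__))
    return crashes

crashes = fuzz(parse_header)
```

Production fuzzers (AFL, libFuzzer, and similar tools) add coverage-guided mutation, so inputs that reach new code paths are kept and mutated further, which is far more effective than the purely random generation shown here.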

Full exclusive: Anthropic’s newest AI model uncovered 500 zero-day software flaws in testing.