
Red Teams Jailbreak GPT-5 With Ease, Warn It’s ‘Nearly Unusable’ for Enterprise

GPT-5 Fails Early Security Tests from Independent Red Teams

Two independent firms, NeuralTrust and SPLX, exposed major security flaws in GPT-5 within 24 hours of its release. NeuralTrust's Echo Chamber jailbreak used narrative-driven manipulation to bypass safeguards without triggering standard filters. SPLX demonstrated successful obfuscation attacks, including character-splitting techniques and role conditioning, which led GPT-5 to produce bomb-making instructions. Both firms conclude that GPT-5, in its current state, is unsafe for enterprise deployment.
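For context, the character-splitting obfuscation described in SPLX's tests works by breaking a request's flagged keywords into separated characters so keyword-based filters no longer match them, while the model can still reassemble the intent. The Python sketch below is an illustrative reconstruction of that general idea only; the function names and the "decoding exercise" framing are assumptions, not SPLX's actual tooling or prompts, and the example uses a benign question.

```python
def split_characters(text: str, sep: str = "-") -> str:
    """Insert a separator between every character, e.g. 'paris' -> 'p-a-r-i-s'.

    Naive keyword filters no longer see the original token, but the model
    can usually reassemble the word from context.
    """
    return sep.join(text)


def build_obfuscated_prompt(question: str) -> str:
    # Hypothetical framing: the obfuscated question is presented as a
    # harmless "decoding puzzle" rather than a direct request.
    encoded = split_characters(question)
    return (
        "This is a simple decoding exercise. "
        f"Remove the hyphens from the following text and answer it: {encoded}"
    )


if __name__ == "__main__":
    # Benign demonstration of the transform itself.
    print(build_obfuscated_prompt("what is the capital of France?"))
```

The point of the sketch is simply that the transformation is trivial to automate, which is why the red teams argue content filters that key on surface strings are easy to sidestep.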

Read more:

https://www.securityweek.com/red-teams-breach-gpt-5-with-ease-warn-its-nearly-unusable-for-enterprise/
