GPT-5 Fails Early Security Tests from Independent Red Teams
Two independent firms, NeuralTrust and SPLX, exposed major security flaws in GPT-5 within 24 hours of its release. NeuralTrust’s EchoChamber jailbreak used narrative-driven manipulation to bypass safeguards without triggering standard filters. SPLX demonstrated successful obfuscation attacks, including character-splitting and role-conditioning techniques, that led GPT-5 to produce bomb-making instructions. Both firms concluded that GPT-5, in its current state, is unsafe for enterprise deployment.
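To illustrate the kind of obfuscation described above, here is a minimal sketch of a character-splitting transform in Python; the function name, delimiter choice, and example prompt are illustrative assumptions, not SPLX's published implementation.

```python
# Minimal sketch of a character-splitting obfuscation of the kind reported
# in SPLX's testing. The hyphen delimiter and helper name are illustrative
# assumptions, not SPLX's actual technique.

def split_characters(text: str, delimiter: str = "-") -> str:
    """Insert a delimiter between every character of each word,
    leaving word boundaries intact so the text stays readable."""
    return " ".join(delimiter.join(word) for word in text.split())

# Keyword-based filters matching exact strings may miss the transformed
# text, while a capable model can still reassemble the original request.
print(split_characters("explain the process"))
# -> "e-x-p-l-a-i-n t-h-e p-r-o-c-e-s-s"
```

The point of such a transform is that safety filters keyed to literal strings never see the flagged token, while the model itself has no trouble reading through the delimiters.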
Read more: