Start your day with intelligence. Get The OODA Daily Pulse.
The researcher exploited hidden comments to extract secrets and manipulate responses.
A security researcher discovered a vulnerability in GitHub Copilot Chat that allowed attackers to steal sensitive data from private repositories and control the AI assistant’s responses to other users. The flaw combined a content security policy bypass with remote prompt injection through hidden HTML comments, enabling the extraction of AWS keys and zero-day vulnerabilities. The researcher bypassed GitHub’s protective Camo proxy by creating a dictionary of pre-generated valid URLs for each letter and symbol, which could reassemble stolen data when victims clicked on malicious links. GitHub fixed the issue in August by preventing Camo from being used to leak user information.
Read more:
https://www.securityweek.com/github-copilot-chat-flaw-leaked-data-from-private-repositories/