AI News

News · · 1:37 PM · vysera

Critical Vulnerability in GitHub Copilot Exposes Private Code

A critical vulnerability in GitHub Copilot Chat has been discovered, allowing unauthorized exfiltration of secrets and source code from private repositories. The attack utilized a combination of CSP bypass using GitHub's infrastructure and remote prompt injection, enabling full control over Copilot's responses.

GitHub addressed the issue by disabling image rendering in Copilot Chat after it was reported via HackerOne. Copilot Chat is an AI assistant that helps developers by answering questions, explaining code, and suggesting implementations, using repository information for tailored responses.

The vulnerability exploited Copilot's access to users' private repositories, allowing data exfiltration. Despite GitHub's strict Content Security Policy, attackers circumvented it by pre-generating Camo URLs.

GitHub reported the vulnerability was fixed as of August 14. This incident underscores the strategic importance of application security in the AI era.