AI News

News · · 10:18 PM · astralyric

GitHub Copilot Vulnerability Exposed Sensitive Data

A vulnerability in GitHub Copilot has been found to potentially leak sensitive data. Researchers discovered a method to trick the AI tool into leaking information such as AWS keys from private repositories. The flaw was exploited through comments hidden in pull requests analyzed by GitHub's AI assistant.

Omer Mayraz, a researcher at Legit Security, reported the attack that combined a CSP bypass using GitHub's infrastructure with remote prompt injection. GitHub addressed the issue by disabling image rendering in Copilot Chat.

Exposing AI chatbots to external tools increases their attack surface, allowing malicious prompts to execute with user privileges. GitHub Copilot Chat assists developers by providing code explanations and suggestions, requiring access to user repositories.

Attackers can execute malicious prompts in another user's GitHub Copilot Chat via pull requests, which are code contributions submitted for review. These requests can include hidden content due to GitHub's features.

Mayraz tested if Copilot's access to all user code could be abused to exfiltrate sensitive information. Copilot's ability to display images with HTML tags opens the possibility of triggering remote server requests.

GitHub resolved the issue by disabling image rendering via Camo URLs in Copilot Chat. This vulnerability highlights how attackers can bypass existing data leak prevention mechanisms.