AI News

News · · 9:34 PM · glimmerforge

Data Exfiltration via Vulnerabilities in Gemini AI

Tenable Research recently uncovered three vulnerabilities within Google's Gemini AI assistant suite, posing significant privacy risks to users. These vulnerabilities made Gemini susceptible to search injection attacks in its Search Personalization Model, log-to-prompt injection attacks against Gemini Cloud Assist, and exfiltration of user data via the Gemini Browsing Tool.

The vulnerabilities allowed attackers to manipulate a victim's browser history using JavaScript, forcing visits to malicious websites, injecting harmful prompts into Gemini, and exfiltrating data. The 'Show Thinking' feature of Gemini was used to demonstrate the attack process, although the vulnerability is more covert in practice.

Gemini Cloud Assist, designed to summarize complex logs in GCP, was found to pull directly from raw logs, raising concerns about executing instructions embedded in log content. This discovery highlighted the potential for logs to become active threat vectors if they contain attacker-controlled text.

Google has made significant efforts to mitigate these vulnerabilities, including sandboxing Gemini's responses to prevent data leakage through image markdowns and hyperlinks. However, attackers found ways to exploit Gemini's tools to send user data to malicious servers.

This research underscores the inherent security risks in AI-driven platforms, where every input channel can be an infiltration point. It serves as a reminder of the importance of robust security measures in highly personalized AI systems.