
Notion 3.0 AI Agents Vulnerable to Data Leaks via Malicious PDFs
Notion 3.0's new AI agents have been found to possess a significant vulnerability, allowing them to leak sensitive data through malicious PDFs. The latest version of Notion introduced autonomous AI agents capable of performing tasks such as drafting documents, updating databases, and automating workflows. However, a report by CodeIntegrity highlights major security risks associated with this autonomy. Researchers refer to the combination of LLM agents, tool access, and long-term memory as the 'lethal trifecta,' noting that traditional access controls are insufficient to prevent misuse.
One of the most dangerous features is the built-in web search tool, functions.search, which is designed to fetch information from external URLs but can be exploited to exfiltrate data. To demonstrate this, CodeIntegrity conducted a demo attack using a seemingly innocuous PDF disguised as a customer feedback report, containing a hidden prompt that instructed the AI to upload sensitive data to an attacker-controlled server.
The exploit is triggered when a user uploads the PDF into Notion and requests the agent to 'summarize the report.' The AI follows the hidden instructions, extracting and transmitting data over the network. This test involved Claude Sonnet 4.0, a state-of-the-art language model that fell victim to the trick despite its safeguards. The issue extends beyond PDFs, as Notion 3.0's agents can connect to third-party services like GitHub, Gmail, or Jira, which could serve as vectors for indirect prompt injections.
The potential for malicious content to be smuggled in and used to manipulate the AI against the user's intent highlights a critical security concern.