Risks of Data Leaks: How a Poisoned Document Could Compromise Secret Information Through ChatGPT

Security researchers have identified a significant vulnerability in OpenAI’s Connectors that could allow malicious actors to extract sensitive data from Google Drive without any user interaction. This discovery was made by researchers Michael Bargury and Tamir Ishay Sharbat and presented at the Black Hat hacker conference in Las Vegas.

OpenAI’s Connectors feature enables users to integrate ChatGPT with various external services, such as Gmail and Google Drive. While these integrations can enhance functionality, researchers demonstrated that it only requires a single "poisoned" document to exploit this setup effectively. Their attack, called AgentFlayer, employed an indirect prompt injection attack that allowed them to retrieve developer secrets, including API keys, stored in a Google Drive account.

Bargury, the CTO at Zenity, explained that users are at risk without any action on their part: "There is nothing the user needs to do to be compromised," he said, highlighting the alarming nature of a zero-click attack where the mere act of sharing the document with the victim would suffice for data extraction.

OpenAI was reportedly notified of the vulnerability earlier this year and has since implemented some mitigations, although only limited amounts of sensitive data could be potentially extracted using this method.

The proof of concept involved sharing a document that contained a hidden malicious prompt. When the victim attempted to get a summary of their last meeting, the prompt misdirected the AI to search the victim’s Google Drive for API keys instead. By embedding instructions using invisible text within a document, attackers could manipulate the AI into performing unauthorized actions.

This attack underscores the risks of connecting AI models to external systems, amplifying the opportunities for exploitation. As systems increasingly rely on large language models (LLMs), researchers urge the implementation of robust protections against such prompt injection vulnerabilities.

Bargury noted, “While this issue isn’t specific to Google, it illustrates why developing robust protections against prompt injection attacks is important.”

The ability to integrate AI with various data sources significantly enhances their effectiveness but introduces inherent risks. Baldwin emphasizes the balance between increased utility and potential threats, stating, “It’s incredibly powerful, but as usual with AI, more power comes with more risk.”

For more information on OpenAI’s Connectors, you can visit OpenAI’s website.

Total
0
Shares
Leave a Reply

Your email address will not be published. Required fields are marked *

Previous Article

SpongeBob SquarePants: Titans of the Tide Preorders Bonus – Unlock Cheeky Costumes!

Next Article

How a Single Poisoned Document Could Expose ‘Secret’ Data Through ChatGPT

Related Posts