OpenAI's ChatGPT can easily be coaxed into leaking your personal data

Swordsmyth

Member
Joined
Apr 14, 2016
Messages
74,737
OpenAI's ChatGPT can easily be coaxed into leaking your personal data — with just a single "poisoned" document.

As Wired reports, security researchers revealed at this year's Black Hat hacker conference that highly sensitive information can be stolen from a Google Drive account with an indirect prompt injection attack. In other words, hackers feed a document with hidden, malicious prompts to an AI that controls your data instead of manipulating it directly with a prompt injection, one of the most serious types of security flaws threatening the safety of user-facing AI systems.

ChatGPT's ability to be linked to a Gmail account allows it to rifle through your files, which could easily expose you to simple hacks.

This latest glaring lapse in cybersecurity highlights the tech's enormous shortcomings, and raises concerns that your personal data simply isn't safe with these types of tools.

"There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out," security firm Zenity CTO Michael Bargury, who discovered the vulnerability with his colleagues, told Wired. "We’ve shown this is completely zero-click; we just need your email, we share the document with you, and that’s it. So yes, this is very, very bad."

Earlier this year, OpenAI launched its Connectors for ChatGPT feature in the form of a beta, giving the chatbot access to Google accounts that allow it to "search files, pull live data, and reference content right in the chat."

More at:
Code:
https://futurism.com/hackers-trick-chatgpt-personal-data

 
Back
Top