Study finds exploit in ChatGPT that could let hackers read all your conversations
Other AI-based chatbots had similar vulnerabilities, except Google Gemini
Published on March 18, 2024
If you share personal matters with ChatGPT or ask questions that disclose private information, stop right away. A recent study suggests that chatbots, including ChatGPT, could be hacked and all your conversations may be accessible to attackers!
In the study, conducted at Israel’s Ben-Gurion University, researchers found a side-channel exploit, present in almost every popular AI-based chatbot except Google Gemini, that could reveal the entire conversation with high, though not 100%, accuracy.
Yisroel Mirsky, head of the Offensive AI Research Lab at Ben-Gurion University, described the findings in an email to Ars Technica.
Researchers shed light on the vulnerabilities in AI-based chatbots
The study is complex and slightly tricky for a regular user to comprehend. In simple terms, the researchers exploited the side-channel vulnerability to capture tokens (the small units of text LLMs use to process inputs and outputs) and then used them to infer the conversation with 55% accuracy.
Researchers opted for a side-channel attack because, instead of attacking the system directly, it lets them collect information the system leaks inadvertently. This way, they could bypass the built-in protection protocols, including encryption, as the sketch below illustrates.
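To make the idea concrete, here is a minimal Python sketch of the kind of token-length side channel the study describes. It assumes, purely for illustration, a chatbot that streams one token per encrypted record and adds a fixed amount of protocol overhead to each one; the overhead value and packet sizes are made-up numbers, not figures from the paper.

```python
# Hypothetical illustration of a token-length side channel.
# Assumption (not from the paper): one token per encrypted record,
# each record carrying a fixed amount of protocol overhead.

FIXED_OVERHEAD = 29  # assumed per-record overhead in bytes

def token_lengths_from_packets(payload_sizes: list[int]) -> list[int]:
    """Infer plaintext token lengths from observed ciphertext payload sizes."""
    return [size - FIXED_OVERHEAD for size in payload_sizes if size > FIXED_OVERHEAD]

# Sizes a passive observer might record while a reply streams in (made up)
observed = [33, 36, 32, 34, 38]
print(token_lengths_from_packets(observed))  # -> [4, 7, 3, 5, 9]
```

The recovered length sequence says nothing about the words directly, but because responses are streamed token by token, it dramatically narrows down what the reply could have been without ever breaking the encryption itself.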
These tokens were then run through two specially trained LLMs (Large Language Models), which could translate them back into readable text, something that would be practically impossible to do manually.
However, since chatbots have a distinct style, researchers were able to train LLMs to effectively decipher the prompts. One LLM was trained to identify the first sentence of the response, while the other worked on inner sentences based on the context.
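As a rough sketch of how such a two-model pipeline could be wired up, and strictly as an assumption rather than the researchers’ actual code, the captured length sequences can be handed sentence by sentence to two models: one that guesses the opening sentence from the length pattern alone, and one that guesses each following sentence using the text inferred so far as context. StubModel and its predict method below are hypothetical placeholders.

```python
# Conceptual pipeline: token-length sequences -> reconstructed response text.
# StubModel is a hypothetical stand-in for the study's two trained LLMs;
# a real attack would plug in fine-tuned models instead.

class StubModel:
    def predict(self, lengths: list[int], context: str = "") -> str:
        # A real model would return its best guess for the sentence that
        # produced this run of token lengths; the stub just echoes them.
        return f"<sentence inferred from lengths {lengths}>"

def reconstruct_response(length_sequences: list[list[int]],
                         first_sentence_model: StubModel,
                         inner_sentence_model: StubModel) -> str:
    sentences = []
    for i, lengths in enumerate(length_sequences):
        if i == 0:
            # The first sentence is predicted from the length pattern alone.
            sentences.append(first_sentence_model.predict(lengths))
        else:
            # Later sentences also condition on everything inferred so far.
            sentences.append(inner_sentence_model.predict(lengths, " ".join(sentences)))
    return " ".join(sentences)

print(reconstruct_response([[4, 7, 3], [5, 9, 2]], StubModel(), StubModel()))
```

The better the models capture a chatbot’s characteristic phrasing, the more accurate the reconstruction, which is why the distinct writing style of these assistants works against them here.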
The researchers described this approach in more detail in their email to Ars Technica.
This breakthrough is also mentioned in their research paper.
The researchers have shared a demo video of the entire process, from Traffic Interception to Response Inference, on YouTube.
So, your ChatGPT chats aren’t as safe as you thought, and hackers may be able to read them! Even though the side-channel exploit wasn’t present in Google’s chatbot, researchers have hacked into Gemini AI and Cloud Console previously.
Besides, there has been a significant rise in cyberattacks since AI became mainstream. A recent report by Microsoft suggests that 87% of companies in the UK are at risk of AI-powered cyberattacks.
Microsoft’s President, Brad Smith, has voiced his own concerns regarding AI and called for immediate regulation!
What do you think about the rise of AI? Share with our readers in the comments section.
More about the topics: AI, ChatGPT
Kazim Ali Alvi
Windows Hardware Expert
Kazim has always been fond of technology, be it scrolling through the settings on his iPhone, Android device, or Windows PC. He’s specialized in hardware devices, always ready to remove a screw or two to find out the real cause of a problem.
A long-time Windows user, Kazim is ready to provide a solution for every software & hardware error on Windows 11, Windows 10, and any previous iteration. He’s also one of our experts in Networking & Security.