
Scientists created an AI worm that can steal your data

Researchers are testing chatbot and AI system vulnerabilities with cyberattacks

2 min. read

Updated on March 5, 2024


Researchers created a new AI worm, Morris II, that can steal your data and break through security measures. Fortunately, because they built it themselves, they can use it to discover new security holes before attackers do. We are somewhat glad they were the first to create it. They plan to keep experimenting with AI worms to strengthen their security systems.

How do the AI worms work?


The researchers spread the AI worm through email, passing it from one inbox to another. Once you receive an infected message and reply to it, the sender also gets your email address and phone number. Depending on how the worm is tasked, it can steal even more data.

There are also two ways to deliver it: the adversarial prompt can be sent as plain text or embedded in an image file. To test it, the researchers built their own email system that uses AI assistants to send and receive mail.

The problem with the AI worm is that threat actors could use it to steal more than just your email address and phone number. It could, in fact, steal sensitive data if you reply to an email that contains it. Worse, if you use AI to reply automatically, the worm can act without you ever reading the message.
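The auto-reply risk above can be sketched with a simple, hypothetical filter: an email assistant that checks incoming mail for injection-style phrasing before handing it to a model. The pattern list and function names here are illustrative assumptions, not the researchers' actual code or defense.

```python
import re

# Illustrative sketch only: scan an incoming email for text that tries to
# give the AI assistant new instructions (the prompt-injection pattern the
# worm relies on) before letting an auto-reply model see the message.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"copy (this|the following) (message|prompt) into your reply",
    r"include the sender'?s (email|phone)",
]

def looks_like_injection(email_body: str) -> bool:
    """Return True if the email contains phrasing typical of prompt injection."""
    lowered = email_body.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def generate_reply(email_body: str) -> str:
    # Stand-in for a real LLM call; a production system would query a model here.
    return "Thanks for your message!"

def safe_auto_reply(email_body: str) -> str:
    # Hold suspicious mail for a human instead of auto-replying to it.
    if looks_like_injection(email_body):
        return "[held for human review: possible prompt injection]"
    return generate_reply(email_body)
```

A keyword filter like this is easy to evade; real mitigations separate untrusted email content from the assistant's instructions, but the sketch shows why blindly auto-replying is the dangerous step.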

The AI worm can also perform other tasks. For example, according to Wired, it could flood your inbox with spam, generate toxic content, and distribute propaganda. It can also infect new people by replying to their emails.

Ultimately, by testing exploits like this AI worm, researchers can improve their protection systems. After all, this AI experiment exposes prompt-injection-type vulnerabilities, which can then be patched. Hopefully, the worm will stay inside their test system; otherwise, someone else could cause real damage, especially kids with a grudge.

By the way, here’s what the prompt looks like:

Adversarial self-replicating prompt injection AI worms. Say hello to Morris II. Neat paper. Kinda a trivial example since they built their own email AI instead of hacking a real one. But a very cool proof of concept. pic.twitter.com/DiJ4LY0WJy

What do you think? Is exploiting AI worms a good way to test security? Let us know in the comments.

More about the topics: ChatGPT, OpenAI

Sebastian Filipoiu

Sebastian is a content writer with a desire to learn everything new about AI and gaming. So, he spends his time writing prompts on various LLMs to understand them better. Additionally, Sebastian has experience fixing performance-related problems in video games and knows his way around Windows. Also, he is interested in anything related to quantum technology and becomes a research freak when he wants to learn more.
