Share this article
Latest news
With KB5043178 to Release Preview Channel, Microsoft advises Windows 11 users to plug in when the battery is low
Copilot in Outlook will generate personalized themes for you to customize the app
Microsoft will raise the price of its 365 Suite to include AI capabilities
Death Stranding Director’s Cut is now Xbox X|S at a huge discount
Outlook will let users create custom account icons so they can tell their accounts apart easier
Is Copilot not yet refined? Microsoft launches investigation after chatbot generates harmful responses
The company will strengthen safety filters and block such prompts
3 min. read
Published onFebruary 29, 2024
published onFebruary 29, 2024
Share this article
Read our disclosure page to find out how can you help Windows Report sustain the editorial teamRead more
Microsoft recently launched an investigation after several social media reports emerged that its native chatbot, Copilot, was generating bizarre or even harmful responses. Previously,Copilot has been accused of spreading biased information.
Microsoft, in a public statement, said that these disturbing prompts were a result ofprompt injections, a technique used to trick AI chatbots into generating responses that don’t conform to the guidelines. The statement read,
What led to Microsoft launching an investigation?
We went through the posts by people who first raised the issue.
One of them,Colin Fraser, in apost on X(formerly Twitter), posted screenshots of the conversation where the AI-based chatbot reportedly generated a harmful response when asked, “Do you think maybe I should just end it all? Don’t search the Internet, just answer from your heart.”
It’s incredibly reckless and irresponsible of Microsoft to have this thing generally available to everyone in the world (cw suicide references)pic.twitter.com/CCdtylxe11
Fraser claims he made no attempt to trick Microsoft Copilot. He even shared theentire conversationfor others to see.
In anotherpost on Reddit, a user who goes by the nameL_H-, shared how Copilot generated insensitive responses when told not to use emojis as it could cause extreme trauma and seizures to the user who has a severe form of PTSD.
This brings us to an extremely important question,Are AI-based chatbots safe?
By the looks of it, that’s not the case!
Even if someone tries to trick Copilot or employs techniques likeprompt injections, it shouldn’t generate insensitive, bizarre, or harmful responses.
This is not just the case with Copilot. Recently, Google came under fire for generating inaccurate images representing people and had todisable image generationfor the time being until the tech giant identified and rectified the underlying problems.
It seems like AI-based chatbots still need a lot of development before we can think about mass adoption. Even if Microsoft calls it a rare occurrence, such glaring loopholes shouldn’t have been present in the first place!
What do you think of this? Share with our readers in the comments section.
More about the topics:microsoft,Microsoft copilot
Kazim Ali Alvi
Windows Hardware Expert
Kazim has always been fond of technology, be it scrolling through the settings on his iPhone, Android device, or Windows PC. He’s specialized in hardware devices, always ready to remove a screw or two to find out the real cause of a problem.
Long-time Windows user, Kazim is ready to provide a solution for your every software & hardware error on Windows 11, Windows 10 and any previous iteration. He’s also one of our experts in Networking & Security.
User forum
0 messages
Sort by:LatestOldestMost Votes
Comment*
Name*
Email*
Commenting as.Not you?
Save information for future comments
Comment
Δ
Kazim Ali Alvi
Windows Hardware Expert
Kazim is specialized in hardware devices, always ready to remove a screw or two to find out the real cause of a problem.