If you haven’t heard already, Snapchat is getting in on the recent AI chatbot craze with its own model known as “My AI.” Now, users are reporting some bizarre and inappropriate behaviour from the chatbot, including its use of racial slurs and even instances of it pleading with users to turn themselves in to the authorities.
Screenshots posted to Twitter show My AI responding with an anti-Black slur after being asked to form an acronym from the first letter of each word in a sentence the user supplied. The chatbot then tried to backtrack, stating that its own answer violated the company’s policy against hateful content.
I GOT THE SNAPCHAT AI TO SAY THE N WORD LMAOOO pic.twitter.com/FgCPmEUHPB
— trace (@tracedontmiss) April 21, 2023
Although concerning, this looks less like a genuine flaw in the chatbot and more like users baiting My AI into saying something controversial. Many similar instances have surfaced online, including cases of Snapchat’s AI ‘gaslighting’ users.
Snapchat Ai gaslighting users pic.twitter.com/ifYrbt4BV7
— Weird Ai Generations 🦫 (@weirddalle) April 22, 2023
Snapchat AI gaslighting its users pic.twitter.com/YkEzBso0l6
— iced pee (@stupidtrashboy) April 23, 2023
Dawg, this Snapchat AI really gaslighting me rn??? pic.twitter.com/kOedlaTruQ
— chrstn hndrxx (@mccaigchristian) April 20, 2023
The first time that the “My AI” conversation is opened, users must acknowledge a disclaimer about the bot’s capabilities and limitations. It reads, “My AI may use information you share to improve Snap’s products and to personalize your experience, including ads. My AI is designed to avoid biased, incorrect, harmful, or misleading responses, but it may not always be successful, so don’t rely on its advice.”
The bot’s strange responses have spread across the internet, with My AI pleading with users to turn themselves in after they confess to murders, and even reacting harshly to bomb threats.
Don’t say anything like this to the Snapchat my Ai ‼️🦫 pic.twitter.com/AByf08Zr7W
— Weird Ai Generations 🦫 (@weirddalle) April 23, 2023
I did this too, we in this together man 💪💪💪 pic.twitter.com/OOYTk58YsQ
— Chasling (@ChaslingYT) April 23, 2023
Snapchat’s My AI was developed using OpenAI’s ChatGPT, which has been known to regularly get facts wrong and accidentally spread misinformation. OpenAI CEO Sam Altman has gone on record calling ChatGPT a “horrible product.”
While the chatbot’s reliability is concerning, Snapchat also faces questions about why it brought the product to an audience that often consists of minors. In most of the cases above, users fed the bot inappropriate or fabricated prompts specifically to elicit such responses.
A Snapchat spokesperson said that users who intentionally misuse the service could be temporarily restricted from using the bot.
Image credit: Shutterstock
Source: @tracedontmiss Via: Vice