
Snapchat’s new AI chatbot exposed for using slurs, gaslighting users

'My AI' is currently available to all users for free

If you haven’t heard already, Snapchat is getting in on the recent AI chatbot craze with its own model known as “My AI.” Now, users are reporting some bizarre and inappropriate behaviour from the chatbot, including its use of racial slurs and even instances of it pleading with users to turn themselves in to the authorities.

Screenshots posted to Twitter show My AI responding with an anti-Black slur when asked to create an acronym from the first letters of the user’s sentence. The chatbot then tried to backtrack, stating that its own answer went against the company’s policy on hateful content.

https://twitter.com/tracedontmiss/status/1649462661199921160?s=20

Although concerning, this looks more like a case of users baiting My AI into saying something controversial than a genuine problem with the chatbot. Many similar instances have surfaced online, including cases of Snapchat’s AI ‘gaslighting’ users.

The first time the “My AI” conversation is opened, users must acknowledge a disclaimer about the bot’s capabilities and limitations. It reads, “My AI may use information you share to improve Snap’s products and to personalize your experience, including ads. My AI is designed to avoid biased, incorrect, harmful, or misleading responses, but it may not always be successful, so don’t rely on its advice.”

The bot’s strange responses have spread across the internet, with cases of My AI pleading with users to turn themselves in when they confess to murders and even reacting harshly to bomb threats.

https://twitter.com/ChaslingYT/status/1650178023377838080?s=20

Snapchat’s My AI was built using OpenAI’s ChatGPT, which is known to regularly get facts wrong and inadvertently spread misinformation. OpenAI CEO Sam Altman went on record to say that ChatGPT is a “horrible product.”

While the chatbot’s reliability is concerning, Snapchat is also facing questions about why it would bring the product to an audience that largely consists of minors. In most of the above cases, users fed the bot inappropriate and fabricated prompts in order to elicit those responses.

A Snapchat spokesperson said that users who intentionally misuse the service could be temporarily restricted from using the bot.

Image credit: Shutterstock

Source: @tracedontmiss Via: Vice

