Twitter testing new feature that advises users to rethink ‘harmful’ tweets

The social media giant seems to be taking a page out of Instagram's book with this feature

Twitter is testing a new feature that warns users before they post a tweet that may include harmful language.

The social media giant is currently testing the feature with a select group of iOS users. Twitter says the pop-up prompt is meant to give users the option to revise their reply before posting it.

It’s unclear what Twitter considers “harmful” content, but it is likely similar to what the platform defines as hate speech, abuse and harassment, all of which is outlined in its policies.

However, Twitter has noted in the past that it won’t remove content simply for being offensive, since “People are allowed to post content, including potentially inflammatory content, as long as they’re not violating the Twitter Rules.”

As The Verge outlines, the new feature doesn’t seem to be aimed at preventing the most extreme forms of harmful content; rather, it’s geared towards encouraging users to rethink inflammatory language.

It’s also interesting to note that Twitter isn’t the first social media platform to add a feature like this. For instance, Instagram started using AI last year to determine whether a caption is offensive. If the AI system finds a caption harmful, it prompts the user to rethink it.

Source: Twitter Via: The Verge 
