The tool will warn users before they post such content.
Twitter is testing a new moderation tool that warns users when a reply they are about to post contains language the company deems “harmful”. The warning gives users the option to revise the reply before it is published.
The company is calling it a “limited experiment”, with the tool currently appearing only for iOS users. What constitutes harmful language can be inferred from the company’s hate speech policies and its Rules document.
The move appears designed to “lightly encourage users to avoid unnecessary and inflammatory language that escalates feuds and might lead to suspensions”. Instagram has taken a similar approach to this kind of user behaviour.