Twitter is adding a new “safety mode” aimed at keeping people safe from seeing abusive posts.
The feature will automatically block accounts that use potentially harmful language, such as insults or hateful remarks, or that tweet in a repetitive or uninvited manner.
Those automatic blocks will remain in place for seven days, meaning the person using the feature will be shielded from seeing those tweets. A blocked account cannot send direct messages or replies to that user, follow their account, or view their tweets.
Initially the feature will be rolled out to a small group of users on iOS, Android and the web version of Twitter. Twitter may contact those users to ask about their experience with the feature.
Users will be given the option to turn the feature on or off. It also takes existing relationships into account, Twitter said, meaning accounts that people follow or interact with regularly won’t be caught in the automatic blocking filters.
It will also provide information about which accounts have been blocked and for how long, so that any accounts blocked in error can be unblocked. “We won’t always get this right and we can make mistakes,” Twitter said in its announcement.
The company said it was introducing the feature as part of a broader push to encourage “healthy conversations” on its platform.
“While we have taken steps to give people more control over their safety experience on Twitter, there is always more to be done,” said Katie Minshall, head of UK public policy at Twitter. “As part of our work in this area, today we are introducing Safety Mode, a feature that allows you to automatically reduce disruptive interactions on Twitter, which in turn improves the health of the public conversation.
“Today’s roll-out will be for a limited feedback group, so we can gather important information ahead of a wider launch. We want to incorporate this feedback to make sure the safety tools we’re developing are as effective as possible, truly empowering people and making them feel comfortable in public conversations.”
The company said that before launch it had consulted on the feature with a variety of people “with expertise in online safety, mental health and human rights”. This also allowed the company to “think through ways to address potential manipulation of our technology”, it said.
Credit: www.independent.co.uk