
Today, Twitter’s taking the next steps towards combating abuse, highlighted by a new, expanded mute feature which enables users to block out any words, phrases, hashtags, @handles and emojis they don’t want to see.
As explained by Twitter:
“Twitter has long had a feature called “mute” which enables you to mute accounts you don’t want to see Tweets from. Now we’re expanding mute to where people need it the most: in notifications. We’re enabling you to mute keywords, phrases, and even entire conversations you don’t want to see notifications about, rolling out to everyone in the coming days. This is a feature we’ve heard many of you ask for, and we’re going to keep listening to make it better and more comprehensive over time.”
Initial reports of Twitter working on a ‘mute words’ option filtered out late last month, when the option was accidentally made available to some users ahead of time.
And as we noted then, it’s not a perfect solution – it doesn’t stop such abuse from happening, and users can get around it by changing the spelling or using other tactics (in Twitter’s Taylor Swift example, the snake emoji has been muted, but the words ‘cobra’ and ‘ekans’ still appear) – but it is a model that’s shown promise elsewhere.
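To see why this kind of filter is easy to sidestep, consider a minimal sketch of literal keyword matching (the MUTED_TERMS set and should_notify function here are hypothetical, as Twitter hasn’t published its implementation): muting the snake emoji does nothing to catch synonyms or respellings.

```python
# A minimal sketch of keyword-based notification muting. The MUTED_TERMS
# set and should_notify() function are hypothetical; Twitter hasn't
# published how its filter actually works.

MUTED_TERMS = {"🐍", "#snake"}  # terms the user has chosen to mute

def should_notify(tweet_text: str) -> bool:
    """Return False if the tweet contains any muted term."""
    text = tweet_text.lower()
    return not any(term.lower() in text for term in MUTED_TERMS)

print(should_notify("Look out 🐍"))       # False: the emoji is muted
print(should_notify("Watch out, cobra"))  # True: 'cobra' isn't in the list
print(should_notify("ekans says hi"))     # True: respellings slip through
```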
But Twitter’s taking this a step further – in addition to keyword blocking, it’s also adding a new option which will enable users to mute entire conversations.
The tool will enable users to stop receiving notifications from a specific Twitter thread without removing the thread from their timeline or blocking anyone. Users will be able to mute any conversations in which they’re included (where their @handle is mentioned).
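Conceptually, this works as a filter at the notification layer rather than the timeline – a muted thread still exists and remains visible, it just stops generating alerts. A rough sketch, using hypothetical names and a simplified notification record:

```python
# A minimal sketch of conversation muting at the notification layer.
# The notification record and MUTED_CONVERSATIONS set are hypothetical
# simplifications; muting suppresses the alert, not the tweet itself.

MUTED_CONVERSATIONS = {"conv_123"}  # thread IDs the user has muted

def deliver_notification(notification: dict) -> bool:
    """Return True if the notification should be shown to the user."""
    return notification.get("conversation_id") not in MUTED_CONVERSATIONS

mention = {"text": "@you check this out", "conversation_id": "conv_123"}
print(deliver_notification(mention))  # False: alert suppressed, but the
                                      # thread stays in the timeline
```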
The options will provide users with more ways to filter and customize their Twitter experience – and, indeed, to feel safer. But as noted, they’re not a complete solution: those abusive comments will still exist, even if hidden from view.
To combat the issue on a deeper level, Twitter’s also added a new ‘hateful conduct’ reporting option. Now, when users go to report a tweet, they’ll see a new ‘directs hate’ option to denote why the tweet in question is harmful.
Twitter’s also retraining its support teams, with special sessions on ‘cultural and historical contextualization of hateful conduct’, and is implementing an ongoing refresher program to ensure those lessons stay up to date – this is especially important as such language and terms are always evolving.
“We’ve also improved our internal tools and systems in order to deal more effectively with this conduct when it’s reported to us. Our goal is a faster and more transparent process.”
In the future, there may be other solutions – some have suggested, for instance, that Twitter could reduce abuse by removing anonymity from the platform.
The platforms themselves will continue to evolve and advance their tools and options on this front, but we, as users, can also help by flagging and reporting such incidents.
Source: www.socialmediatoday.com