Twitter Continues to Fail at Handling Harassment


Note: we are intentionally not sharing specific examples we have seen in order to protect those who have been recent targets of harassment.

Imagine this. You use Twitter in teaching, or for professional development, or at conferences. It’s been an important environment for you, and you’ve encouraged others at your institution to see that Twitter isn’t all trolling and trivia.

And then there it is. A tweet about you so laughably off track as a claim that you hope everyone sees it that way. But it’s followed by another and another, and the claims escalate, dragging in others. They become more graphic, more stupid, more bizarre, and more personal, and now they’re directed right at your employer, so they’re impossible to ignore.

At the beginning of 2017, Twitter announced that it would start taking abusive content more seriously. In a March 1, 2017 blog post, Twitter’s VP of Engineering said “We aim to only act on accounts when we’re confident, based on our algorithms, that their behavior is abusive. Since these tools are new we will sometimes make mistakes, but know that we are actively working to improve and iterate on them everyday.”

Among its promises? Stopping abusers from creating new accounts, and a continued strong emphasis on hiding abusive content from the abused/harassed person. But as Bill Fitzgerald recently wrote, the latter is problematic: when someone reports a tweet or account, Twitter hides the abusive tweets from the reporter (whether or not they block or mute) but not from the rest of the world. This creates the impression that the report was successful, when in fact the problem may continue. Moreover, Twitter responds to reports inconsistently: sometimes on your timeline, sometimes via email, and the way to find these responses in the phone app is different from the way you find them on the web.

Addressing harassment is understood to be a priority for Twitter. But whatever progress is being made is hidden behind the obfuscation of Twitter’s reporting system. We see three key problems for educators and professional users beyond those Bill Fitzgerald outlines.

First, the options for reporting targeted harassment reflect a narrow definition of harassing behavior. Users who face malicious or libelous attacks from multiple fake accounts simultaneously, but who are being neither doxxed nor directly threatened with violence, cannot rely on Twitter to find the offending tweets in violation of its rules. To be clear, we’re not talking about disputes, disagreements, or even name-calling, but about tweets that clearly breach Twitter’s standard on hateful conduct, under which Twitter claims not to tolerate “repeated and/or non-consensual slurs, epithets, racist and sexist tropes, or other content that degrades someone.”

Meanwhile, it seems that accounts can evade Twitter’s rules by mixing abusive tweets with benign auto-tweets, avoiding shutdown as an account that is “primarily” used for abuse or harassment. Twitter also seems unwilling to go the extra step of recognizing the connections between harassing accounts, even when these are reported with strong evidence (such as accounts tweeting from identical IP addresses!).

Second, when Twitter does take action, the action it takes is unclear. The message that an account has been found to violate the rules doesn’t say whether the account was taken down or whether specific tweets were removed. There is no opportunity to review the decision, and because Twitter may also have selectively hidden tweets from the reporter’s view that remain visible to other accounts, the reporter has no means of understanding what, if anything, has been done. This leaves the reporter having to log out of Twitter, search incognito, and see what the end result was. Imagine going to the police to report someone for spreading lies about you, and the police agree that the person has broken the law, but their response is to hide that person’s lies from you and not tell you what action they will take!

Third, Twitter continues to justify releasing very little information about how its system actually works by claiming that the details would make it easier for harassers to “game the system”. As a result it’s hard to know whether a tweet has been carefully analyzed by a human moderator, including in relation to other tweets or accounts that together form a pattern of harassing behavior … or not. Very little is known about the digital labor that lies behind Twitter’s reporting system, or even whether that labor involves human judgment.

To us, this confirms the impression that Twitter’s focus is on managing impressions: dealing with public criticism of a platform that has a reputation for harassment, especially where celebrities and other blue-tick users are concerned, while at the same time claiming a robust defense of freedom of speech.

As a result, there is a sharp contrast between the limitations of the reporting system and Twitter’s statements about providing a safe environment for families, teen users, users vulnerable to self-harm, and professional or educational use. And across many of Twitter’s policy statements, blog posts and protocols, one common theme persists: that most problems can be solved by shielding the target of harassment from the concerning content – a “head in the sand” approach that fails entirely when the problem is what’s being said about you to your professional peers, your students, or your community.

To us—and both of us use Twitter extensively as open educators—this raises serious questions about the appropriateness of Twitter either as a professional platform or as a platform suited to educational use. Specifically, we can see that, in attempting to have it both ways on action against harassment, Twitter is relatively helpless against a determined individual harasser with time on their hands and a sufficient grasp of Twitter’s own rules to get around them. Twitter has failed to keep its promise.

We’ll keep thinking and talking about this…

We are thinking about this a lot as we get ready to co-facilitate a week as part of the month-long online Digital Citizenship (#DigCiz) event, starting June 12, and later a track focused on networked learning at the Digital Pedagogy Lab Institute (DPLI) at the University of Mary Washington in August, inshallah. Readers may also find the newly announced DPLI track led by Kris Shaffer on Data Literacies of interest – and check out Kris and Bill Fitzgerald’s recent article about identifying bots and sockpuppets, which, you know, maybe Twitter could learn from?

 
