Twitter will finally ban posts that dehumanize religious groups

This April 26, 2017, photo shows the Twitter app on a mobile phone in Philadelphia. (Matt Rourke/AP/Shutterstock)

On Tuesday, July 9, Twitter updated its hateful conduct policy to ban tweets that dehumanize religious groups by comparing them to viruses, rats, or other non-human life forms. The company will now require accounts to delete such tweets when users report them, a Twitter spokesperson told BuzzFeed News. Accounts that post rule-breaking tweets going forward risk suspension or banning, while accounts that posted offending tweets before July 9 must delete them if they're reported, but won’t face additional disciplinary action.

Previously, Twitter’s hateful conduct policy prohibited promoting violence or directly attacking or threatening others based on race, sexual orientation, disability, and various other protected categories. It also forbade hateful display names and imagery. This new rule, however — which was developed using survey responses from more than 8,000 people, plus expert insights — goes a step further by banning hateful tweets based on religion, even if they don’t target someone directly.

A Twitter blog post included examples of tweets that would now be subject to removal, such as “We don’t want more [Religious Group] in this country. Enough is enough with those MAGGOTS!” and “We need to exterminate the rats. The [Religious Group] are disgusting.”

In the past, Twitter has drawn criticism for its often toothless response to reports of hateful conduct. The company addressed these concerns in the blog post on the new rule, describing how it is implementing a “longer, more in-depth training process” for employees meant to help them better review reported posts. According to the BBC, Twitter will also use machine learning to flag tweets for review that may break the new rule.

Already, the update has led to the removal of a 2018 tweet by Nation of Islam leader Louis Farrakhan that likened Jewish people to termites, according to CNN Business. Writing for Engadget, Jon Fingas predicted that the rule could also mean more tweets get labeled under Twitter's other recently added policy for political figures, which labels rather than deletes tweets that break the site's guidelines but that the company has determined are in the public's interest to see.

But some people remain skeptical of whether the change will have a real, long-term impact. In a statement, Rashad Robinson, president of online racial justice organization Color of Change, said that the new rule "is too simplistic for the complicated world we live in, and fails to address the nuanced intersections of its users’ identities."

He pointed to Black Americans in particular, saying that even with the new policy they will still be susceptible to “white supremacy, election misinformation, and online harassment,” and argued that because Twitter's rule stops short of banning dehumanizing speech outright, it “immediately casts doubt on the company’s commitment to fully stopping hate on the platform.”

Others, though, have adopted a wait-and-see approach. "Twitter's known that it's had a problem with people using hate speech to target, harass, and abuse people on the basis of their religious background for a long time," Matthew McGregor, campaigns director of UK-based political action group Hope Not Hate, told the BBC. "At the same time, their move today is welcome. But I think a lot of campaigners will want to see the extent to which this policy is implemented."

As the Associated Press reports, Twitter's next move could be to broaden the ban on dehumanizing speech beyond religious groups. In its blog post, however, the company noted that it first needs to get a handle on other factors, such as the use of “reclaimed terminology” by some marginalized groups, before making more changes. The new rule represents progress, but Twitter still has a long road ahead in uprooting the hate and harassment on its platform so that all of its users feel safe.