Why YouTube and social platforms are struggling to deal with hate speech

Over the last few days, YouTube's position on hate speech has been put to the test. The results have not been encouraging. And they're just further evidence that tech companies are at a complete loss as to how to enforce their own rules.

The latest controversy to hit the platform started last Thursday, when Vox writer Carlos Maza tweeted a compilation of clips from conservative commentator Steven Crowder's YouTube channel. The video showed Crowder firing off a series of insults directed at Maza. In the clips, Crowder refers to Maza as an "anchor baby," an "angry little queer," a "little queer," a "lispy sprite," "Mr. Gay Vox," "Mr. Lispy queer from Vox," a "gay Mexican," a "gay Latino from Vox," and a variety of other bigoted attacks. Maza, who is gay and Latino, also noted that after his personal information was posted online last year, Crowder's fans flooded his phone with text messages demanding that he debate Crowder.

Crowder's campaign against Maza has been going on for two years. Maza said he reported Crowder's videos manually, but no action was ever taken and he never heard back from YouTube about the issue. Only after his collection of clips went viral did YouTube review Crowder's content. The company found that while Crowder's language was "clearly hurtful," the videos didn't violate any of its policies. In an email to Gizmodo, a spokesperson further explained that YouTube viewed Crowder's comments as an attempt to "respond" to Maza's opinions rather than to "harass or threaten" him.

The decision is a strange one given YouTube's existing rules. According to the company's harassment and cyberbullying policy, "content that is deliberately posted in order to humiliate someone" and "content that makes hurtful and negative personal comments/videos about another person" are prohibited. Crowder, who is actively selling T-shirts that read "Carlos Maza Is A F*g," somehow does not violate these rules by YouTube's standards.

YouTube said that it makes these decisions in part by trying to determine whether criticism is focused on debating opinions or is intended to be malicious. In YouTube's view, Crowder didn't explicitly incite hate or encourage harassment in his videos; he was simply responding to criticism. The company took pains to point out that Crowder never revealed any of Maza's personal information in his videos, even though doxxing is only one of a number of criteria the company uses to classify content as harassment.

Crowder may not have specifically told his viewers to harass Maza, but it doesn't take saying the words to encourage the action. Crowder directed slurs and derogatory language at Maza, and his followers followed suit. The Anti-Defamation League (ADL) told Mic that approximately 15,000 tweets were directed at Maza between May 31 and June 3, many of which contained "hateful and homophobic" comments. That represented a dramatic uptick: by the ADL's count, Maza received approximately 45,000 tweets mentioning him in the entire year from June 1, 2018 to June 1, 2019, an average of about 120 a day, compared to roughly 3,750 a day during that four-day stretch.

"Reviewing the videos, it's clear that they are in violation of YouTube's terms of service as stated. It is also clear the language in the videos was created to harm Maza and the LGBTQ community, and not to engage in meaningful debate," ADL CEO Jonathan Greenblatt said in a statement.

Despite declaring that Crowder did not violate its policies, YouTube decided on Wednesday that there was enough wrong with his videos to demonetize his channel entirely. YouTube said Crowder's "pattern of egregious actions has harmed the broader community," though it didn't detail which actions it found egregious, and said his behavior violates YouTube Partner Program policies. The company clarified, however, that it would reinstate his ability to monetize videos if he removes the links to T-shirts that target Maza and addresses other issues.

YouTube didn't respond to multiple requests for comment on whether Crowder's channel has received any strikes for violating the company's harassment policy. YouTube's three-strike system punishes channels for violating its policies and will ban users who receive three strikes in a three-month period.

This entire situation is indicative of a much broader issue that faces YouTube and other tech platforms. While these companies are perfectly capable of crafting seemingly sound policies designed to prohibit bad actors on their platforms — YouTube even tweaked its hate speech policy in the midst of the controversy to further crack down on white supremacists and conspiracy theorists — they appear to have no idea how and when to enforce them.

YouTube will tell you that it has no issue removing hateful content. In an email to Mic, the company highlighted that it removed 47,443 videos and 10,623 accounts for violating its policies on cyberbullying and harassment in the first quarter of 2019. That accounts for 0.4 percent of the accounts and 0.6 percent of the videos removed from the platform during that time, per YouTube's transparency report. But when it came to Crowder, the company simply couldn't bring itself to remove the videos, even though most plain readings of its policies suggest the content violates them.

When YouTube finally did punish Crowder, the decision was to take away his access to advertising revenue. As Maza pointed out on Twitter, demonetization isn't enough to deter this behavior. Crowder has multiple streams of revenue available to him outside of YouTube, and his fan base will likely be even more motivated to support him now that he can claim to be a victim of YouTube's decision. Because the offending videos are still up on YouTube, the platform's algorithm will continue to recommend them to like-minded people, who will support him financially by buying merchandise or by following him to other platforms where he can still make money.

YouTube's handling of Crowder is reminiscent of how it and other social media companies completely bungled their response to Alex Jones, the conspiracy theorist who regularly used bigoted language, pushed conspiracy theories, and harassed the victims of tragic events, including the families who lost children in the Sandy Hook mass shooting.

Jones was regularly and rather explicitly in violation of YouTube's rules on harassment and hate speech, but it took until 2018 for the company to issue a strike against his channel and impose any sort of punishment. Jones quickly racked up a second strike, which limited his ability to post videos, but those red marks in his ledger were wiped away after three months.

YouTube did remove some Jones-related content as well — just not from his own channel. The company took down a compilation video showing Jones’ attacks directed at the victims of the Sandy Hook massacre that was uploaded by Media Matters to highlight some of the outlandish commentary that Jones got away with. According to YouTube, that video violated its policy on harassment and bullying — though it was eventually restored. Meanwhile, the videos in which Jones actually says those things remained up on his channel, according to The Outline. Jones was eventually banned from YouTube, but only after Apple and Facebook took action to boot him from their platforms first.

Facebook, YouTube, and other social platforms like Twitter and Reddit have all made a considerable show lately of cracking down on hate speech. Facebook launched a new moderation team dedicated to identifying hateful content on its platform. Reddit quarantined white supremacist communities to isolate them from the rest of the site. Twitter suspended some alt-right accounts for violating its policies.

Despite these nods toward action, a considerable amount of similar content remains, and most of the companies don't seem to have the stomach to follow through on the rules they have in place. Perhaps that's because following through would force them to make some uncomfortable decisions. A report earlier this year from Motherboard suggested that Twitter hasn't used its algorithms to flag white supremacist content for fear that Republican politicians would be caught in the mix.

If these companies want to get serious about removing hate speech from their platforms, enforcement has to apply to everyone. It's becoming increasingly clear that they would rather weather the public backlash from mealy-mouthed half-measures than enforce their policies in full and confront some unpleasant truths about the types of people who have gathered on their platforms.