Here’s what YouTube is doing about videos endorsing terror groups

YouTube just dropped a few updates on its plan to curb videos encouraging extremist ideology. It's good timing, considering the company had a scandalous run-in with clients last month: the platform refunded major advertisers after their ads ran as pre-roll before extremist content. Imagine a L'Oreal or McDonald's ad playing before a video that attempts to rally people to a violent cause. It was a bit of a PR nightmare.

But even a month before that incident, YouTube shared a four-step plan to combat terrorism.

As the company put it in its June announcement, “the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now.”

Here’s how the platform says it’s dealing with dangerous content:

Using machine learning techniques to pick out terrorist content

Machine learning, a branch of artificial intelligence, lets YouTube automatically classify uploaded content and delete videos found to contain "extremist and terrorism-related content." It's not a perfect line of defense, but the platform recently announced that its improved machine learning systems now account for over 75% of the videos taken down for violent extremism in the past month, and that those videos were removed before a single user flagged them. YouTube also said it has more than doubled the number of extremist videos taken down over that period.
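To make the idea concrete, here's a minimal sketch of how an automated classifier over video metadata might work, using an off-the-shelf text pipeline (scikit-learn's TF-IDF plus logistic regression). The training examples, labels and thresholds here are invented for illustration; YouTube hasn't disclosed its actual models or features.

```python
# Illustrative sketch only: a generic text classifier over video metadata,
# NOT YouTube's actual system. All data, labels and thresholds are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: metadata snippets labeled by human reviewers.
texts = [
    "documentary interview about the history of the conflict",  # benign (placeholder)
    "news report on a recent attack",                           # benign (placeholder)
    "join our fighters and pledge allegiance now",               # violating (placeholder)
    "recruitment call for armed struggle",                       # violating (placeholder)
]
labels = [0, 0, 1, 1]  # 1 = violates the "violent extremism" policy

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# Score a new upload's metadata; high-scoring videos could be removed
# automatically or queued for human review, depending on confidence.
score = classifier.predict_proba(["pledge allegiance and join the fight"])[0, 1]
if score > 0.9:      # thresholds are assumptions for illustration
    print("auto-removal candidate:", score)
elif score > 0.5:
    print("queue for human review:", score)
```

In practice a system like this would combine many more signals (audio, video frames, account history) and keep humans in the loop, which is exactly where YouTube's next step comes in.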

Adding more humans to the equation

YouTube has given special abilities to users in its "trusted flagger" program for a few years now. The program was born of the reality that some vigilante YouTube users were particularly keen on flagging problematic content, and three times more accurate at finding videos that actually violate the platform's community guidelines than the average person. Those users can report multiple videos at once, and YouTube has since expanded the program to draw on advice from NGOs and other institutions such as the Anti-Defamation League.

Such experts can help make “nuanced decisions about the line between violent propaganda and religious or newsworthy speech,” YouTube wrote in a June statement.
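For a sense of how flags from more-accurate reviewers might be prioritized, here's a hedged sketch of a review queue that weights trusted flagger reports more heavily and lets one flagger batch-report several videos. The weighting scheme and data model are assumptions for illustration, not YouTube's internals.

```python
# Hedged sketch: one way a moderation queue could prioritize reports,
# assuming trusted flaggers' reports carry more weight than average users'.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Flag:
    priority: float
    video_id: str = field(compare=False)
    flagger: str = field(compare=False)

def weight(is_trusted: bool) -> float:
    # Trusted flaggers are about three times more accurate, per YouTube's
    # own figure, so give their reports proportionally more weight (assumed).
    return 3.0 if is_trusted else 1.0

queue: list[Flag] = []

def report(video_id: str, flagger: str, trusted: bool) -> None:
    # heapq pops the smallest item first, so negate the weight.
    heapq.heappush(queue, Flag(-weight(trusted), video_id, flagger))

# A trusted flagger (say, an NGO reviewer) can batch-report several videos.
for vid in ["vid_001", "vid_002", "vid_003"]:
    report(vid, "ngo_reviewer", trusted=True)
report("vid_004", "regular_user", trusted=False)

print(heapq.heappop(queue).video_id)  # trusted reports surface first
```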

Cracking down on gray-area videos

Not all uploads are explicit terrorist content, but they may still contain hateful or supremacist language. Those videos sit at the center of a classic First Amendment-style debate: where do tech giants draw the line, and how can they allow free speech without endorsing hate speech?

For these cases, YouTube's response has essentially been to make the videos harder to find and engage with. Such videos "will not be monetized, recommended or eligible for comments or user endorsements," YouTube stated. "We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints."
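Here's a minimal sketch of what that "limited features" treatment could look like as a per-video settings record. The field names and data model are invented for illustration; YouTube hasn't published how this is implemented.

```python
# Illustrative sketch of the "limited features" state described above,
# assuming a simple per-video settings record; not YouTube's actual data model.
from dataclasses import dataclass

@dataclass
class VideoFeatures:
    monetized: bool = True
    recommended: bool = True
    comments_enabled: bool = True
    endorsements_visible: bool = True  # likes/shares shown to viewers

def apply_limited_state(features: VideoFeatures) -> VideoFeatures:
    # The video stays up (free expression, access to information), but every
    # engagement and distribution surface is switched off.
    return VideoFeatures(
        monetized=False,
        recommended=False,
        comments_enabled=False,
        endorsements_visible=False,
    )

print(apply_limited_state(VideoFeatures()))
```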

Convincing people not to join terror groups

A lot of content from terrorist groups is dedicated to recruiting new members through propaganda. So when, for example, an ISIS sympathizer types in search terms that trip YouTube's internal alarms, the platform now redirects them to playlists of "curated" videos that "directly confront and debunk violent extremist messages."
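As a rough illustration of that redirect approach, here's a sketch that matches search queries against a keyword list and points matching searches at a curated counter-messaging playlist. The keywords, playlist ID and matching rule are placeholders; the system YouTube describes is presumably far more sophisticated.

```python
# Hedged sketch of the redirect idea: match a search query against
# recruitment-related terms and surface a curated counter-messaging playlist.
# Keywords, playlist ID and matching logic are invented for illustration.
COUNTER_MESSAGING_PLAYLIST = "PL_counter_narratives"    # hypothetical ID
RECRUITMENT_TERMS = {"join isis", "become a fighter"}   # placeholder terms

def handle_search(query: str) -> dict:
    normalized = query.lower()
    if any(term in normalized for term in RECRUITMENT_TERMS):
        # Promote curated videos that confront and debunk the recruiting
        # narrative instead of (or above) the organic results.
        return {"redirect": True, "playlist": COUNTER_MESSAGING_PLAYLIST}
    return {"redirect": False, "playlist": None}

print(handle_search("how to become a fighter abroad"))
```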

“In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages,” YouTube stated in June.

These four steps are an example of a major tech company taking some level of responsibility for its content, but future research will ultimately have to show whether this plan as a whole actually works.