Facebook's decision to use AI to moderate content is a dicey strategy during coronavirus

In the midst of the coronavirus pandemic, just about everyone who still has a job is being told to do it from home if possible — resulting in a massive spike in internet usage, particularly for communication apps. That shift includes the people who serve as the first line of defense for content on social networks: moderators. Across the board, companies that rely on contract workers to manually review and rule on flagged posts have been sending those workers home, leaving the internet's biggest moderation efforts largely to artificial intelligence. The result could be a considerably different online experience.

It has only been a few days since companies including Facebook and YouTube embarked on the new AI experiment, and the impact of the change is already evident. Less than 24 hours after Facebook's announcement about the move, the site engaged in some overzealous flagging — though Facebook denied that the situation was related to the change in moderation systems. According to a report from TechCrunch, Facebook's filter started marking thousands of legitimate links as spam, blocking content from publications including USA Today, BuzzFeed, and Medium. The spam filter seemed to catch a lot of coronavirus-related content in particular, preventing important information about the virus from being posted or shared in comments. Guy Rosen, Facebook's vice president of integrity, said on Twitter that the issue was the result of "a bug in an anti-spam system" and was "unrelated to any changes in our content moderator workforce." The posts have since been restored, and the sites caught up in the spam filter have once again been given the green light.

Despite Rosen's assurances that the spam filter issue was unrelated to the moderator situation, it's hard not to wonder about the timing. In a blog post, Facebook proactively warned that while the company doesn't "expect this to impact people using our platform in any noticeable way," there may be "some limitations to this approach and we may see some longer response times and make more mistakes as a result." It's also hard to separate this situation from Facebook's previous efforts to go human-free. Turn back the clock to 2016, when Facebook announced that it would ditch human curators for its trending news module and hand control over entirely to algorithms. The results were almost immediately disastrous. According to The Guardian, during the first weekend with fully automated moderation at the helm, a fake story about Megyn Kelly being fired from Fox News for supporting Hillary Clinton and a video of a man masturbating with a McDonald's chicken sandwich both reached the top of the trending section.

Those results are less than inspiring — though, in a way, understandable. Facebook's algorithms started pushing the kind of content that people were spending a lot of time looking at and sharing, and, as we know, there's no accounting for people's tastes. But with similar systems in charge of moderator duties, the goal won't be to highlight content — it will be to suppress the stuff that isn't supposed to get through. That could prove a much bigger challenge, particularly given Facebook's reliance on humans — tens of thousands of them — to handle the most sensitive content.

According to a report from The Verge, Facebook's human moderation team is exposed to a considerable amount of unsavory content, ranging from hate speech to acts of violence to graphic pornography. That includes videos and other depictions of disturbing acts — death, rape, and mutilation. This type of imagery comes across moderators' screens daily, for hours on end, all in the name of keeping it from reaching the feeds of Facebook users. And while it would be better for everyone involved if such content could be reliably caught by artificial intelligence and never require human intervention, Facebook's automated defenses have failed in the past. When Facebook launched its Facebook Live platform, it decided to largely forgo human moderators. The result, rather predictably, was upsetting acts finding their way onto the platform. According to the Wall Street Journal, more than 50 acts of violence — including murders, suicides, and sexual assaults — were broadcast on Facebook Live during its first year of operation.

During a time when many will be experiencing isolation, there is special concern for those struggling with mental illness. Signs of people feeling distanced from their friends, family, and the familiar fixtures of their lives are already starting to show. On a press call held yesterday, Facebook CEO Mark Zuckerberg told reporters that calls on WhatsApp and Facebook Messenger have doubled their normal rate, surpassing the spike the apps usually see on New Year's Eve — their highest-usage period of the year. According to a report from The Verge, dealing with self-harm content has become Facebook's top priority. While the company has sent home its team of contractors who moderate content, Zuckerberg said that full-time employees will focus largely on reports of self-harm-related content on the platform. The company anticipates a spike in those cases and may need additional staff to help handle it. As it focuses on this area, Facebook warns that other reports might slip through the cracks, which could mean more spam, hate speech, and fake accounts cropping up across the platform.

While Facebook shifts its remaining human moderators to protect against self-harm, the company's army of contractors is now at home, working remotely to help train Facebook's machine-learning classifiers so the company can ramp up its automated moderation efforts. Those contractors are experiencing isolation like the rest of the world, but they continue to be exposed to potentially troubling and challenging content while they train Facebook's algorithms — and are now doing so without guaranteed support for their own mental health. On yesterday's press call, Zuckerberg said that it will be "very challenging to enforce that people are getting the mental health support that they needed" now that they are working from home rather than out of an office. These are the same contractors who were excluded from the $1,000 bonus that Facebook is providing to its full-time employees now working from home, according to a report from The Intercept.

Facebook is not the only company that will face challenges without its full team of moderators on duty. YouTube announced that it will also lean on AI moderators for the time being, and it warns that mistakes will happen. The company said that "users and creators may see increased video removals, including some videos that may not violate policies." Out of an abundance of caution, it plans to be more selective about the videos it promotes and will exclude some unreviewed content from search results, the homepage, and recommendations. Given that human moderators working for YouTube are typically exposed to hours of gruesome content each day and are charged with making sure those videos never make it onto the platform, it makes sense that YouTube is being extra careful with its automated systems.

Without human moderators serving as the buffer against upsetting, violent, and illegal content, the internet's automated moderators will have to kick into high gear. That likely means a lot more content getting blocked and taken down. In some cases, even legitimate and essential information could get caught in these wide sweeps, making it harder to access reliable sources at a time when it is crucial that information be widely and easily available. An internet without humans on hand to sift through the subtleties will be moderated with a hammer rather than a scalpel. Given how valuable those human defenses appear to be, it is incumbent upon the companies that employ them to make sure those workers are properly compensated and given the care and support they require.