We asked tech companies to respond to Trump's call for a solution to mass shootings
Two mass shootings that took place over the weekend in El Paso, Texas, and Dayton, Ohio, have left many people searching for new ways to prevent these heinous, violent acts. While many have pointed to the accessibility of firearms, others, unwilling to consider the role guns play in acts of gun violence, are looking elsewhere for solutions. In a speech on Monday, President Donald Trump called for social media companies to develop new tools that could sniff out potential shooters before they have the opportunity to act.
"The perils of the Internet and social media cannot be ignored, and they will not be ignored," Trump said before calling on social companies like Facebook and Twitter to help in "identifying and acting on early warning signs." According to the president, he believes that law enforcement and tech firms should work together to "develop tools that can detect mass shooters before they strike." Trump also claimed that the "glorification of violence" through things like "gruesome and grisly video games" is to blame, a claim that is unsubstantiated and lacks both in evidence and support from experts. "It is too easy today for troubled youth to surround themselves with a culture that celebrates violence," Trump said. "We must stop or substantially reduce this, and it has to begin immediately."
In typical Trump fashion, he didn't provide any details on what these supposed tools might look like or what kind of action can be taken to limit access to media that glorifies violence. From his statement, it sounds as though he is calling on companies to proactively monitor content and flag activity that may be deemed suspicious.
Most sites do have filters and algorithms designed to help moderate content, and they may be able to identify certain keywords, but they aren't able to discern a person's intentions or determine the context of a post. Lyrics from a song may reference shooting someone, but posting that song isn't an indication that a person plans to actually carry out that act. Facebook uses artificial intelligence and machine learning algorithms to detect and remove terrorist content before human users identify and report it, and has more recently started teaching its automated moderators to identify people displaying suicidal behavior. Twitter has likewise leaned on machine learning tools to identify threatening and violent content posted to its platform; the company claimed earlier this year that it now catches more abusive content before users report it than ever before. While these systems are getting better at detection at scale, they lack the subtlety to recognize jokes and other context, and they often cast so wide a net that effectively scrutinizing every flagged post to determine what requires immediate action would be difficult.
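To illustrate why keyword matching alone falls short, here is a minimal sketch of the kind of naive flagging described above. The keyword list and example posts are hypothetical, and no platform's actual moderation system is represented; real systems layer machine learning models on top of this, but the underlying context problem is the same.

```python
# A minimal, hypothetical sketch of naive keyword-based flagging,
# illustrating why a keyword hit alone can't distinguish a genuine
# threat from quoted song lyrics or an innocuous figure of speech.
# This is not any platform's actual system.

VIOLENT_KEYWORDS = {"shoot", "kill", "gun"}  # hypothetical watch list

def flag_post(text: str) -> bool:
    """Return True if the post contains any watched keyword."""
    words = {word.strip(".,!?\"'").lower() for word in text.split()}
    return bool(words & VIOLENT_KEYWORDS)

posts = [
    "I shot the sheriff, but I didn't shoot the deputy",  # song lyric
    "Heading to the range to shoot some hoops later",     # basketball
    "Nice weather today",
]

for post in posts:
    print(flag_post(post), "-", post)

# The first two posts are flagged even though neither signals intent.
# That is exactly the context problem described above: a keyword match
# says nothing about what the poster actually means.
```

Scaling this up only amplifies the problem: across billions of posts, even a small false-positive rate produces far more flags than any review process could meaningfully act on.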
Law enforcement agencies are already relatively active in monitoring social media. A number of large police departments around the United States were discovered to have partnered with a company called Geofeedia to monitor and analyze data from posts on Facebook, Twitter, and Instagram. The tool was most notably used by Baltimore police to monitor the activity of protesters who took to the streets following the death of Freddie Gray. Civil rights groups like the American Civil Liberties Union (ACLU) objected to the technology and effectively got some uses of the service blocked, while social media companies cut off Geofeedia's access to their data. That hasn't stopped police from finding other ways to keep tabs on social media activity. The Brennan Center has documented 158 jurisdictions across the country that have spent at least some resources using social media for intelligence gathering and investigations. CityLab documented a low-tech approach some police forces have taken to use social media without data-mining services: a slideshow put together by a former member of the Cook County Sheriff's Office Intelligence Center showed how police create fake accounts to "catfish" civilians and access information.
Trump's call for some sort of social media dragnet may align with a recent request for proposals placed by the FBI. The bureau has asked companies to help it create a "social media early alerting tool in order to mitigate multifaceted threats." The proposal suggested the tool could be used to "proactively identify and monitor" social media services to detect a "diverse range of threats to the U.S. National interests." It's not clear what sort of interest the FBI's request has generated or how willing social media companies will be to participate in such a program, assuming their participation is even required for such a service to operate effectively.
Mic reached out to major social media companies to gauge their reaction to the president's call for additional social media monitoring programs. Here are their responses:
Facebook, Instagram and WhatsApp
Facebook did not respond to a request for comment and has not provided public comment on the president's speech. Recode reported that the company pointed to its Community Standards Enforcement Report in response to requests for information. In the report, which is part of Facebook's transparency efforts, the company states that it notifies law enforcement in cases where content on its platform presents a "specific, imminent and credible threat to human life."
Google and YouTube
Google told Mic that it has been investing in the policies and resources needed to protect YouTube users from harmful content. The company said that hate speech and content that promotes violence are prohibited under YouTube's Community Guidelines. The company recently updated its approach to hateful content to deal with videos that promote violent extremism and racial supremacism. According to Google, YouTube removed more than eight million videos in the first quarter of 2019 alone and did so before the majority of those videos received a single view.
Google declined to provide an on-the-record comment regarding President Trump's calls for companies to develop new tools to identify mass shooters before they act.
Twitter

Twitter pointed Mic to a variety of figures regarding its enforcement efforts designed to curb terrorist content on its platform. Twitter's rules prohibit violent threats, specifically those made toward individuals or groups of people. According to the company, during its most recent reporting period, it suspended 166,513 accounts for violations related to promoting terrorism.
Twitter also noted that it works with law enforcement and authorities around the world to help facilitate investigations when needed. The company stated that it has a team working 24/7 to support law enforcement efforts. Twitter details the requests it receives from authorities in its Twitter Transparency Report, which it releases twice per year.
Twitter declined to provide an on-the-record comment regarding President Trump's calls for companies to develop new tools to identify mass shooters before they act.
Reddit

A spokesperson for Reddit told Mic:
Reddit's site-wide policies prohibit content that encourages, glorifies, incites, or calls for violence. We are always evaluating and evolving our policies, and in the past several years we have significantly built out the teams responsible for enforcing those site-wide policies, proactively going after bad actors on the site, and creating engineering solutions partnered with people to detect and prevent them in the future.
While the companies did not directly address President Trump's call for increased social media monitoring, their responses do give some insight into the companies' thinking. Touting existing efforts, policy changes, and flashy content-removal figures suggests these companies believe they are already doing the best they can at the moment, or at the very least are already doing what is required of them. It is worth noting that when asked about dealing with mass shootings, most companies pointed to their efforts to remove terrorist content. Those tools are typically thought of as being used to deal with international terrorist organizations like Al-Qaeda or ISIL. Citing those figures suggests that social media companies may view mass shooting incidents as acts of domestic terrorism, a term some political leaders choose not to use when describing these violent acts.