Twitter announced in November that it was giving users better reporting tools. But an improved reporting tool is futile if Twitter finds the abusive content permissible or is slow to remove it.
A cursory search on Twitter reveals that there are still a number of users reporting content in blatant violation of the platform's guidelines, only to get the following response from Twitter:
"We reviewed your report carefully and found that there was no violation of Twitter's rules regarding abusive behavior."
We reached out to Twitter for comment on how this happens and whether the company is working toward an improved review process. A Twitter spokesperson pointed us to a portion of its Help Center page (emphasis theirs):
Freedom of expression means little if voices are silenced because people are afraid to speak up. We do not tolerate behavior that harasses, intimidates, or uses fear to silence another person’s voice. If you see something on Twitter that violates these rules, please report it to us...
Some Tweets may seem to be abusive when viewed in isolation, but may not be when viewed in the context of a larger conversation. While we accept reports of violations from anyone, sometimes we also need to hear directly from the target to ensure that we have proper context. The number of reports we receive does not impact whether or not something will be removed. However, it may help us prioritize the order in which it gets reviewed...
Game developer Brianna Wu, who is running for Congress on a platform of privacy rights and inclusive technology, said in February that she has "never had anything happen" when she reported tweets for a second time. She added that Twitter should develop a more transparent policy about what happens when reported tweets are mistakenly marked as allowable. "There are things happening to me literally every day that are blatant violations of the ToS, and I know if Twitter just gives it a pass the first time, nothing is going to happen."
While Twitter does now send a confirmation message when it has received your report, it seems a human or a machine (a combination of humans and algorithms review reports) is still flagging abusive content as permissible, or lacks a sense of urgency in handling some content that clearly isn't.