Social media platforms have never been short on controversy when it comes to policing their content (or not). We all know the internet has always had a dark side; in recent years, however, hate groups, disinformation campaigns, Russian bots, and even terrorist organizations have taken to social media platforms to spread their messages, seemingly protected in the name of free speech. While Instagram has maintained a more innocent vibe than Twitter and Facebook, it has also seen an uptick in accounts posting harmful content that skirts the line between freedom of expression and a violation of the platform’s terms of service — that box we all check promising to follow the rules and not “abuse, harass, threaten, impersonate or intimidate other Instagram users.”
In the interest of keeping its platform free of dangerous content and users, Instagram has joined other social platforms, including Facebook, Pinterest, and Reddit, in responding to public (and, more recently, government) pressure to make a more concerted effort to ban terrorist video streams, self-harm imagery, anti-vaccination content, accounts linked to white nationalist groups, and other types of harmful content. While all of these categories technically fall under Instagram’s definition of a terms of service violation, the internet’s more nefarious users are never short on creativity when it comes to finding loopholes.
While Instagram has recently upped its efforts to purge fake and malicious accounts, the system isn’t infallible. Not all of the content flagged (or in some cases taken down) is actually harmful. According to Instagram, some of these rule-abiding posts simply get caught in its content filters as it works to keep pornographic and sex-trafficking-related content off the platform. In other cases, however, including poet Rupi Kaur’s now famous “period” post, it took a massive outcry from users to overturn Instagram’s deliberate decision to remove content that was clearly about female empowerment and violated none of the platform’s rules about female nudity.
Many users have been particularly frustrated by the fact that while seemingly innocuous posts like Kaur’s get caught up in Instagram’s content filters, self-harm, misogynistic, LGBTQ-phobic, and white nationalist content (among others) continues to proliferate. If you’re curious about what actually gets you banned on Instagram these days, check out our deep-dive below.
Violating the Terms of Service
This should be a no-brainer, but the fastest way to get banned from Instagram is to violate the terms of service or community guidelines. Violations of these rules include breaking the law, posting harmful or inappropriate content, posting copyrighted images you don’t have a license to share, and spamming, any of which “may result in deleted content, disabled accounts, or other restrictions.”
Breaking the Law
As free as the internet may seem, you still have to follow the law. For Instagram, this means content “support(ing) or prais(ing) terrorism, organized crime, or hate groups” violates the terms of service, as does “offering sexual services, buying or selling firearms and illegal or prescription drugs (even if it’s legal in your region).” In addition, the platform has a zero-tolerance policy when it comes to content advertising or associated with “sexual content involving minors or threatening to post intimate images of others.”
Posting Harmful or Inappropriate Content
Even if a user’s content isn’t explicitly illegal, it can still fall into this category, which includes nudity, graphic sexual content, and posts that are directly harmful to others, such as revenge porn, self-harm content, violence, terrorist activity, and hate speech. As far as nudity and sexual content are concerned, even a post that isn’t meant to be pornographic can still be banned. Instagram’s community guidelines define this category as “photos, videos, and some digitally-created content that show sexual intercourse, genitals, and close-ups of fully-nude buttocks,” as well as some images of female nipples, though “photos of post-mastectomy scarring and women actively breastfeeding are allowed.”
Violating Intellectual Property Rights
Because users retain the intellectual property rights to the content they post on Instagram (and certify that they hold those rights), the platform prohibits users from posting content they don’t have the rights to use. This includes “anything you’ve copied or collected from the Internet that you don’t have the right to post,” including the content of other Instagram users. (Impersonating another Instagram account is also a violation of the platform’s terms of service.)
If a user believes their content has been re-posted by another user without their consent, they can go to Instagram’s Help page and fill out a form describing what happened. Instagram notes that, “Typically, if you create one of those works (a typical Instagram post), you obtain a copyright from the moment you create it,” so asking permission before you re-gram someone else’s content (and always giving credit where it’s due) is key.
Using Banned Hashtags
This is a tricky one: banned hashtags are not always what you’d expect, and Instagram doesn’t keep an open list of what they are at any given point in time. However, companies and internet sleuths have compiled lists of the hashtags you definitely shouldn’t be using. Some hashtags are obviously problematic, containing profanity or terms referring to pornography or violence; but when ordinary hashtags are co-opted by enough bots, fake accounts, or malicious users in connection with harmful or spam content, problematic and innocent posts alike can end up hidden and flagged to Instagram’s content moderators.
Automating Your Account
Another way for an otherwise innocent account to get flagged as “fake” (and possibly banned as a result) is to use a third-party app to do your work for you. While there are tools that can help users schedule posts and manage content from their desktop, using tools that automate comments, likes, or direct messages, or that spam followers’ feeds with content, will make Instagram think a legitimate account is actually a bot.
If you’re trying to grow your engagement, it’s important to act naturally. Varying your comments, not tagging a bunch of accounts that don’t follow you, and being smart about your likes can keep your account from getting flagged; it can also help bring followers to you instead of driving them away, or losing them entirely if your account gets taken down.
The rules of social media are always in flux and while platforms like Instagram are taking more and better steps to keep their users safe, there are things you can do to report harassment or suspicious and/or harmful content:
- Instagram acknowledges that not all users follow the rules and that, despite its ongoing efforts, you may come into contact with harmful accounts. The first thing you can do in this situation is consult the platform’s safety guide, which walks you through blocking and reporting users to Instagram administrators.
- Instagram has in-app reporting that lets users flag spam or otherwise harmful content and accounts.
- If you see content that is clearly illegal or are experiencing threats online there are resources you can use to contact local law enforcement.