Facebook's new privacy push isn't nearly enough to fix the site's problems


In late April, news broke that Facebook expects to pay up to $5 billion in fines as a result of an FTC investigation into the company’s massive data privacy violations exposed by the 2018 Cambridge Analytica scandal. Now, as part of its settlement negotiations, The New York Times reports that Facebook is considering creating a privacy committee to evaluate how best to protect users' data, as well as bringing on an "external assessor," appointed jointly by Facebook and the FTC, who would further consult on the social network's ongoing privacy issues.

The report goes along with Facebook's broader, and much-needed, re-commitment to user privacy. At the company’s F8 developer conference in San Jose on April 30, CEO Mark Zuckerberg spoke under a stage backdrop that read, fittingly, “The future is Private." He revealed that the platform is launching a full website redesign, focusing more on private messaging between friends and encouraging users to interact in closed groups and on its Messenger app rather than in the public News Feed. And over the last two years, Facebook has run multiple apology ads and targeted marketing campaigns aiming to ease minds about its data-handling behind the scenes.


It's unlikely, though, that all these promises and changes will have a substantial effect on how the platform handles users' privacy, especially when it comes to the encouragement to use Messenger. Released as a stand-alone app in 2011, Messenger has a complicated privacy past; almost immediately after its launch, alarm bells went off across the internet as users began digging into the app’s terms and conditions. As part of setting up Messenger, users had to, among other things, allow the app to collect "loads of data on you, including your call and text history from outside of Messenger," according to Business Insider.

After news of this forced agreement was widely publicized years later, Facebook responded to complaints, saying it had updated its practices to make the data collection an “opt-in” feature. However, the site still justified its scanning of users’ content as a means of detecting malware, and maintained that keeping private messages unencrypted (by default) was similarly done for users' own protection. In light of recent privacy complaints, though, Facebook executives at this year’s F8 conference did announce multiple upgrades to boost Messenger’s encryption, reduce the amount of data stored, and better secure what remains.

Still, using the vast quantities of data generated by its users across its apps and platforms to sell targeted ads is the foundation of Facebook’s business model. And while we may never know for sure if our phones are always listening to us or just how much our Facebook posts and likes affect the ads we get, major platforms do collect enough info from our browser history, location data, social media likes, and other ad clicks to connect the dots. If you’re Facebook, that data gets even more powerful when you can subtly get users to agree to hand over access to their contacts, private messages, photos, and more.


And now, despite the privacy upgrades, the platform is about to get a lot more aggressive about integrating its business partners into features like Messenger. The app is already connected to the business-oriented Facebook Marketplace, but now Facebook is building additional features where, as stated during F8, “...businesses can easily create an ad that drives people to a simple Q&A in Messenger to learn more about their customers." That's a darker turn. Don't forget that back in December 2018, a New York Times investigation revealed that Facebook had been sharing users’ private messaging data with Amazon, Microsoft, Netflix, and other business partners for marketing purposes without users' knowledge or consent.

Then there are the problems with Facebook's private groups, which the platform is now encouraging users to participate in more frequently. Currently, over 400 million users belong to at least one of the site's tens of millions of active groups. While many of these groups are harmless, some support white supremacist, sexual predator, and even terrorist communities, and a few have over a million members. Many social media disinformation experts are concerned that Facebook's push to get users to interact more in closed groups as opposed to “out in the open" will make it even more difficult to keep harmful accounts from abusing the platform going forward.

As concern grows over the increasing amount of dangerous or outright fake content on Facebook, the company has stepped up efforts to purge harmful accounts and groups, announcing recently that it is banning a number of anti-Semitic, anti-vaccination, and alt-right figures, including Alex Jones, Louis Farrakhan, and Laura Loomer, from its platforms. That's certainly progress, but if Facebook is serious about maintaining users' safety and privacy, it needs to take more concrete steps that address how it intends to keep data private and the site clear of harmful accounts.