What will social media look like on Nov. 3?

By Adrianne Jeffries

This article was originally published by The Markup, a nonprofit newsroom covering the ways technology is changing society. Follow them on Twitter and Facebook, and subscribe to their newsletter here.

Premature election results. Anecdotal reports of fraud and voter suppression. Threats of violence at the polls.

Researchers expect the amount of election-related misinformation and hate circulating online to intensify on Nov. 3 and in the following days, spurred by an extraordinarily contentious election, pandemic-related changes to the voting process, and, possibly, a delay in determining a presidential winner. There are also fears of foreign interference along the lines of the Russian disinformation campaign that spread propaganda to millions of Americans on Facebook and Twitter in 2016.

The major tech platforms — already under unprecedented scrutiny from antitrust regulators — have announced policies to combat misinformation as election results trickle in. That’s likely to give rise to another theme: claims of censorship, a major topic pushed by Republican senators during a Commerce Committee hearing with the CEOs of Facebook, Google, and Twitter last week.

“There is real mistrust among the American people about whether you’re being fair or transparent,” Sen. John Thune (R-SD) said at the hearing. “And this extends the concerns about the kinds of amplification and suppression decisions your platforms may make on Election Day and during the post-election period, if the results of the election are too close to call.”

It’s impossible to predict what the online discourse will look like and how it will translate offline, said Emily Bell, director of the Tow Center for Digital Journalism at the Columbia University Graduate School of Journalism, which has studied political polarization and misinformation online.

For example, this year the public has more insight into political advertising than it had in 2016, thanks to transparency initiatives like Facebook’s political ad library, but potentially less insight into what people are discussing in Facebook groups and on messaging apps.

“It’s simply more closed than it was four years ago,” Bell said.

However, researchers and platforms are focusing on some potential Election Day issues that have already started to percolate.

Campaigns or candidates may try to claim premature victory, Bell said, while honest errors by the media and tech platforms could fuel a sense that everything is misinformation and nothing is trustworthy.

Tech platforms are seemingly planning for what could go wrong. Their policy changes include restricting advertising as well as reducing the distribution of, labeling, or removing certain types of information. BuzzFeed reported last week that Facebook has indefinitely stopped recommending that people join political groups on its platform — a feature of its algorithmically generated suggestions for users.

“The fact that all of the policies arrived in something of a hail in the past six months speaks to what they really are, which is public relations strategies to hold off regulation,” Bell said.

According to a report by the Election Integrity Partnership, a coalition of academics and researchers, Facebook (which also owns Instagram and WhatsApp) and Pinterest have the most complete plans in place, followed by YouTube (owned by Google) and Twitter. Nextdoor, Snapchat, and TikTok also have some specific election policies, according to EIP.

“The policies may not be comprehensive, but they set a standard for the platforms to hold themselves to,” said Carly Miller, a research analyst at the Stanford Internet Observatory and an author of the EIP report.

Here are the major promises Facebook, Google, Twitter, and Pinterest have made when it comes to Election Day coverage.

Misinformation on Election Results

Facebook and Instagram plan to show a notification at the top of users’ feeds that says “Votes Are Still Being Counted” and directs them to the Voting Information Center, a page with updated information about the results sourced from partners such as Reuters. If a candidate or party declares victory before major media outlets call a winner, the company says it will label the post similarly and add “more specific information in the notifications that counting is still in progress and no winner has been determined.” If a candidate is declared the winner by “major media outlets” but an opponent contests the results, Facebook and Instagram will show the “projected winner” with a link to the Voting Information Center.

Twitter has already started displaying a notification at the top of users’ timelines that says, “Election results might be delayed.” The company says it will label tweets that make false claims about election outcomes before either state election officials or at least two national news outlets announce formal results. Twitter also introduced user interaction changes on Oct. 20 that will apply through at least the end of election week. These include pushing users to add commentary when they share others’ tweets; only showing recommended tweets from people users already follow; and adding more explanation to its trending stories.

Starting on Election Day, YouTube will show a message on search results and videos that says, “Election results may not be final,” and links to Google’s election results tracker, which pulls results from The Associated Press. Google will also show the tracker at the top of election-related search results. YouTube has no explicit policy on premature claims of victory.

Premature election results count as “content apparently intended to delegitimize election results on the basis of false or misleading claims” under Pinterest’s community guidelines and will be removed or limited, Pinterest spokesperson Crystal Espinosa said in an email. Pinterest will rely on Reuters and the AP as its sources for election results, she said, but will not display results on its platform. “Political content is not a popular use case on Pinterest,” Espinosa said.

Undermining Legitimacy and Discouraging Voting

Facebook says it attaches a label with additional information from its fact-checking partners to Facebook and Instagram posts that claim “lawful methods of voting,” such as voting by mail, “lead to fraud.” The company also says it will remove posts that claim people will get COVID-19 if they vote in person and add a link to “authoritative information” on posts that try to discourage voting by referencing COVID-19.

Twitter will label or remove false information about voting, including “misleading claims that polling places are closed” and “misleading information about requirements for participation, including identification or citizenship requirements.”

YouTube shows information panels with links to third-party information on videos for subjects that frequently attract misinformation, such as voting by mail. Videos that “mislead voters about the time, place, means or eligibility requirements for voting, or false claims that materially discourage voting” are not allowed under its voter suppression policy.

Pinterest updated its content policies to say it will “remove or limit distribution” of misinformation that appears aimed at delegitimizing election results. “There are no exceptions to this rule — including for public figures,” according to a Sept. 3 blog post.

Political Advertising

Facebook says it will stop running “all social issue, electoral or political ads” once the polls close on Nov. 3.

Additionally, Facebook told political advertisers they had to submit all their remaining ads a week before the election. The company also released a long list of restrictions on ad content, including prohibitions on ads that say “vote today” without specific context and on ads claiming ICE is present at a poll site.

Facebook has already failed to enforce some of its policies: The Trump campaign posted new ads after the blackout period started, including some that said “vote today,” while the Biden campaign said “thousands” of its ads were wrongly blocked. Facebook published a statement about the “unanticipated issues,” attributing them to technical problems and miscommunication.

Twitter banned political advertising, including all ads from candidates and parties, in 2019.

Once polls close on Nov. 3, Google will “temporarily pause ads referencing the 2020 election, the candidates, or its outcome.” YouTube spokesperson Ivy Choi clarified in an email that this will also apply to YouTube.

Pinterest has prohibited ads for candidates, political action committees, legislation, and political merchandise since 2018. It also does not show ads against political content posted by users. “That means we won’t show ads when you search for common election-related search terms like presidential or vice-presidential candidate names, ‘polling place,’ and ‘vote,’ ” the company said in its blog post.

Intimidation and Violence

Facebook and Instagram say they will “remove calls for people to engage in poll watching when those calls use militarized language or suggest that the goal is to intimidate, exert control, or display power over election officials or voters.”

Twitter says it will remove tweets that encourage violence or incite people to interfere with voting “on and around Election night.”

YouTube says it will use its existing policy against violent content to remove incitements to violence around the election, such as those directed at poll workers.

Pinterest updated its content policy to prohibit “threats against voting locations, census or voting personnel, voters or census participants, including intimidation of vulnerable or protected group voters or participants.”

Many Platforms Do Not Have Specific Policies for the Election

Parler, Gab, Discord, WhatsApp, Telegram, Reddit, and Twitch do not have election-related moderation policies, according to EIP.

Many types of election-related misinformation would be covered under Reddit’s existing policies, spokesperson Sandra Chu said in an email, noting that all political ads are vetted by salespeople.


Andrew Torba, Gab’s CEO, said in an email, “Gab’s election-related policies are the same as they have always been: any and all political speech that is protected by the first amendment is allowed to flow freely on Gab. Threats of violence and illegal activity is not.”

Danielle Meister, a spokesperson for WhatsApp (owned by Facebook), sent a link to a policy that includes restrictions on forwarded messages, which it calls “a potential source of misinformation,” and encourages users to message a bot run by the International Fact-Checking Network during the U.S. election.

Gabriella Raila, a spokesperson for Twitch (owned by Amazon), pointed to its community guidelines, which do not specifically mention elections, and said in an email that “we will not take action on content that contests election results” if it doesn’t violate those guidelines.

“We have internal policies against mis- and disinformation and have been vigilant for any such behavior surrounding the election,” Sean Li, head of policy, trust and safety at Discord, said in an email. He noted that there is “no way for anything to go viral” on Discord. “This means that Discord isn’t an attractive target for those seeking to spread misinformation because of the limited reach of a post,” he said.

Parler and Telegram did not respond to requests for comment.