The double standards in Facebook and Twitter’s Trump ban

Social media companies are waging a war against incitement and hate speech in the West, but what about the Global South?

Social media companies were right to de-platform Trump to stop him from inciting further violence. But the safety and freedoms of millions of people using their platforms in the Global South are also equally important, writes Mehmood [AP Photo/Jenny Kane, File]

Earlier this month, United States President Donald Trump was banned from a number of leading social media platforms for “inciting violence” after his supporters stormed the US Capitol building to prevent Congress from certifying Joe Biden’s election victory. Twitter also suspended 70,000 accounts posting QAnon content.

The US president has long been using his vast social media reach to spread misinformation, put targets on the backs of his political opponents, and intimidate his critics. As a result, Twitter and Facebook’s decision to “permanently suspend” Trump’s accounts was celebrated by many as an important, albeit late, step in the right direction. The de-platforming of the president, however, also drew attention to the very same social media companies’ conduct in other countries, especially in the Global South, where state actors and their supporters are still actively using their platforms to incite violence, spread misinformation, and intimidate dissidents.

In the US, it took a violent, deadly riot at the Capitol building to convince social media companies to finally take action against the destructive online behaviour of state officials and their supporters. What will it take for the same companies to take meaningful action against similar behaviour from state actors and their supporters elsewhere in the world? When will these companies act to protect their users living in countries with far more repressive governments and fragile democracies? What will they do to prevent governments, powerful political and religious groups, and security forces from using their platforms to spread harmful misinformation and silence dissent?

In Pakistan, for example, where mainstream media is subject to intense state censorship, social media has long been the only venue where citizens, minorities, human rights defenders and political dissidents can voice their grievances and expose the wrongdoings of their leaders.

Aware of this, the state introduced the draconian Electronic Crimes Act in 2017, and in 2020 it enforced rules that give the Pakistan Telecommunication Authority far-reaching powers to censor content critical of the state, government officials, and Islam. As a result, human rights defenders and journalists are being arrested and jailed for their online speech. The country’s controversial blasphemy laws are also being used to control speech on social media, with people being put on death row for comments they allegedly made on online platforms.

“Cyber armies” of trolls supportive of the state and the military, meanwhile, are staging regular, coordinated attacks against disenfranchised groups and harassing and threatening the government’s critics with impunity. So far, social media companies have not taken any meaningful action to bring an end to these malicious harassment campaigns. They also do not appear eager to prevent state authorities from using their platforms to identify and target dissidents.

While the Pakistani state controls the online behaviour of most of its critics with an iron fist, the destructive actions of radical groups with political clout remain unchallenged.

One such group is Tehreek-e-Labbaik Pakistan (TLP), a far-right religious political party that calls for blasphemers to be put to death and celebrates those who have murdered the alleged perpetrators.

The party, whose members organised countless violent protests across the country in recent years, openly and repeatedly calls for the murder of any alleged blasphemer on its social media accounts. Many supporters of the hardline group also join these calls, and regularly incite violence against the persecuted Ahmadiyya community and other minorities in their posts and tweets.

The Pakistani authorities rarely take action against these social media users, perhaps because their accounts often also contain messages of loyalty and love for the state and the military.

Facebook and Twitter also did not permanently suspend the accounts of the TLP or its prominent members. Hashtags and statements by the group, calling for the execution of people accused of blasphemy, also remain visible on both platforms and continue to reach hundreds of thousands of people in Pakistan and beyond.

How can social media companies take no action against this religious political party? How can its members be allowed to use these platforms to accuse people of blasphemy, and call for their execution, in a country where religiously-motivated violence is rife and often committed with impunity?

Why are tweets and posts inciting violence against persecuted religious groups, human rights defenders, journalists, opposition members and progressives in Pakistan not being treated the same way by social media companies as Trump’s tweets instructing his supporters to “fight” for him or QAnon content deemed “potentially harmful”?

Of course, calling on social media companies to take action against such content has its own pitfalls. As has been the case in the US, the individuals and groups who end up being de-platformed can perceive this as an attack on their freedom of speech.

Indeed, there is an important discussion to be had about who actually has the mandate to determine the limits of free speech online. But as millions across the world, from Pakistan and India to the US, face immediate threats every day due to content published freely on social media platforms, social media companies have a responsibility to take action now, without further delay.

On paper, these platforms already have rules and regulations that ban harassment, incitement to violence, hate speech, and the spread of harmful misinformation.

The mechanisms that allow for these regulations to be implemented, however, are not working effectively, especially in countries like Pakistan.

Human rights defenders, activists, researchers, journalists and members of persecuted communities who regularly face harassment and threats on online platforms often find themselves unable to get tweets and posts that clearly violate platform guidelines removed.

As Facebook and Twitter rarely make the effort to find and remove harmful speech, people and groups who are at the receiving end of this abuse end up being forced to track and report the offending posts themselves. While they achieve some success, often with the help of independent groups specialising in such work, they do not have the capacity to monitor and report all such content by themselves. Moreover, these companies rarely respond to reports of abuse promptly, and at times completely ignore complaints.

In September 2018, a UN fact-finding mission on Myanmar determined that Facebook “has been a useful instrument for those seeking to spread hate” in the country. A few months later, Facebook admitted that its failure to track and remove harmful content about the Rohingya played a significant role in the “offline violence” the long-persecuted minority community faced in Myanmar in 2017. Facebook’s admission raised hopes that social media companies would do more to fight incitement in the Global South. So far, however, their efforts have been limited to swiftly removing posts that state authorities want removed.

Both Facebook and Twitter have the necessary resources to rid their platforms of harmful content in every country in the world, if only they wanted to. They can build region-specific teams consisting of people who speak local languages and understand the domestic sociopolitical dynamics and task them with reviewing and removing harmful content rapidly. In addition to removing content that violates the platforms’ guidelines through human review and technology, they can also block certain keywords from appearing in trends and searches to stop harmful content from spreading online and spilling into offline spaces, and vice versa. Twitter has already used this method to limit the reach of QAnon content in the US. In fact, the company said in a blog post published on January 12 that it would “update tools” as “terminology [of hate and abuse]” and “behaviours evolve.”

Social media companies were right to de-platform Trump and alt-right users to stop them from inciting further violence. But the safety and freedoms of millions of people using their platforms in the Global South are also equally important.

Unless Facebook and Twitter take similar action against those posting harmful and violent content that violates their guidelines and policies in other countries, they cannot avoid being accused of hypocrisy. If they do not act, and act fast, they will be complicit in the crimes committed by the hatemongers they happily gave a platform to.

The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera’s editorial stance.