Even in time of genocide, Big Tech silences Palestinians

Social media platforms have been systematically censoring pro-Palestinian content while allowing anti-Palestinian hate speech.

A Palestinian man tries to use his phone as he sits amid the rubble of buildings destroyed by Israeli bombardment in Gaza City on November 24, 2023 [AFP/Omar El-Qattaa]

The scorching violence against the people of Gaza has been unprecedented. And so have its reverberations online. Palestinians documenting and speaking up against Israel’s genocidal war on Gaza have faced relentless censorship and repression, accompanied by an explosion of state-sponsored disinformation, hate speech and calls to violence on social media.

Following Hamas’s attack on Israel on October 7, Big Tech companies set out to remove content on the war that they claimed violated their rules. TikTok removed more than 925,000 videos from the Middle East between October 7 and 31. As of November 14, X, formerly known as Twitter, had taken action on over 350,000 posts. Meta, for its part, removed or marked as disturbing more than 795,000 posts in the three days following the attack.

This elimination frenzy, run by ill-trained algorithms and further fuelled by pressure from the EU and Israel, has resulted in the disproportionate censorship of critical Palestinian voices, including content creators, journalists, and activists reporting from the ground in Gaza.

Despite being accused of promoting pro-Palestinian content, TikTok has arbitrarily and repeatedly censored content on Palestine. For example, on October 9, United States-based media outlet Mondoweiss reported that its TikTok account had been permanently banned. It was reinstated only to be suspended again a few days later. The company did not provide any explanation.

X has also been accused of suppressing pro-Palestinian voices. For example, the account of the US branch of the Palestine Action group was unable to gain any new followers; the issue was resolved only after public pressure mounted.

Meta, of all companies, holds the lion’s share of this digital repression campaign. It has arbitrarily removed Palestine-related content, disrupted live streaming, restricted comments, and suspended accounts.

Among those who have been targeted is Palestinian photojournalist Motaz Azaiza, who had gained over 15 million followers on Instagram for documenting the Israeli atrocities in Gaza; his account was suspended before later being reinstated. The Facebook page of Quds News Network, one of the largest Palestinian news networks with over 10 million followers, was also permanently banned.

On Instagram, people posting about Palestine have experienced shadowbanning – a stealth form of censorship where an individual is rendered invisible on the platform without being notified. Meta also reduced the threshold of certainty required for automated filters to hide hostile comments from 80 percent to 25 percent for content originating from Palestine. We have documented cases where Instagram hid comments containing the Palestinian flag emoji for being “potentially offensive”.

Meta’s content moderation has never been forgiving of Palestinian speech, especially in times of crisis. The company’s rules, developed in the aftermath of the US-led “war on terror”, have disproportionately disfavoured and silenced Arabic-language political speech. For example, an overwhelming majority of individuals and organisations on its secret “terrorist” blacklist are from the Middle East and South Asia – a reflection of the US foreign policy posture.

The company’s Dangerous Organizations and Individuals (DOI) policy, which prohibits the praise, support and representation of these individuals and groups, is the catalyst behind its heavy-handed censorship and discrimination against Palestinians.

Back in 2021, this policy was responsible for silencing pro-Palestinian individuals when they took to the streets and to social media to protest Israel’s attempt to forcefully expel Palestinian families from their homes in the occupied East Jerusalem neighbourhood of Sheikh Jarrah.

In the context of the ongoing Israeli war on Gaza, Meta stated that it applies its policies equally around the world and denied claims that it was “deliberately suppressing voice”. Evidence, however, suggests otherwise.

Two weeks into Russia’s war on Ukraine, Meta bent its rules to allow Ukrainians to express themselves freely. It allowed, for instance, calls for violence against Russian invaders. It even delisted the Azov Battalion, a neo-Nazi group designated under its DOI policy, to allow users to praise it.

In defence of these exceptions, the company’s President of Global Affairs Nick Clegg wrote: “If we applied our standard content policies without any adjustments we would now be removing content from ordinary Ukrainians expressing their resistance and fury at the invading military forces, which would rightly be viewed as unacceptable.”

Have any adjustments been made to ordinary Palestinians “expressing their resistance and fury at the invading military forces”? Quite the opposite. In a blog post that was last updated on December 5, Meta stated that it has disabled hashtags, restricted live streaming, and removed seven times as many pieces of content as it did in the two months prior to October for violating its DOI policy.

Even on the humanitarian front, double standards are on full display. Meta went to great lengths to coordinate humanitarian relief for Ukrainians, including enabling a feature that helps them stay informed, locate their family members and loved ones, and access emergency services, mental health support, housing assistance and refugee aid among others.

No such support has been afforded to Palestinians in Gaza who face communications blackouts and a humanitarian catastrophe of unspeakable scale.

This discrimination extends to how Meta allocates its resources and enforces its policies. Arabic-language content is heavily over-moderated, while Hebrew content remains under-moderated. Until September 2023, Meta did not have classifiers to automatically detect and remove hate speech in Hebrew, even though its platforms were used by Israelis to explicitly call for violence and to organise pogroms against Palestinians. A recent internal memo revealed that the company was unable to use the newly built Hebrew classifier on Instagram comments due to insufficient training data.

This is deeply worrying in light of the fact that Meta significantly relies on automated content moderation tools. Some 98 percent of Instagram’s content moderation decisions are automated and almost 94 percent are automated on Facebook. These tools have repeatedly been revealed as poorly trained in Arabic and its various dialects.

According to one internal memo leaked in the 2021 Facebook papers, Meta’s automated tools to detect terrorist content incorrectly deleted nonviolent Arabic content 77 percent of the time.

This partially explains the egregious impact we are seeing on people’s ability to exercise their rights and document human rights abuses and war crimes. It also explains some unjustifiable system glitches, including labelling Al-Aqsa Mosque, the third holiest mosque in Islam, as a terrorist organisation in 2021; translating the bios of Instagram users with a Palestinian flag to “Praise be to God, Palestinian terrorists are fighting for their freedom”; and deleting footage of dead bodies from the al-Ahli Hospital bombing for violating its policy on adult nudity and sexual activity, no less.

Meanwhile, Meta is allowing verified state accounts that belong to the Israeli government – including politicians, the Israeli army and its spokespeople – to disseminate war propaganda and disinformation that justifies war crimes and crimes against humanity including attacks on hospitals and ambulances, filmed confessions of Palestinian detainees, and almost daily “evacuation” orders for Palestinian civilians.

Instead of protecting Palestinians in Gaza as they are facing what 36 UN human rights experts and other genocide scholars have warned amounts to genocide, Meta has approved paid ads that explicitly called for a “holocaust for the Palestinians” and wiping out “Gazan women and children and the elderly”.

Such disturbing calls for violence have made their way to other platforms as well. In fact, X seems to be leading other platforms on the amount of hate speech and incitement to violence targeting Palestinians. According to Palestinian digital rights organisation 7amleh, there have been more than two million such posts on the platform since October 7.

Telegram also hosts a number of Israeli channels which openly call for genocide and celebrate the collective punishment of the Palestinian people. In one group, named “Nazi Hunters 2023”, moderators post pictures of Palestinian public figures with crosshair marks on their faces as well as their home addresses and call for their elimination.

So far, social media companies do not seem to comprehend the gravity of the situation at hand. Meta, in particular, seems to have learned very little from its role in Myanmar’s genocide of the Rohingya in 2017.

The silencing of Palestinians, while promoting disinformation and violence against them, may have been the modus operandi for social media platforms in the absence of any meaningful accountability. But this round is different. Meta is risking being implicated again in genocide and it must correct course before it is too late. The responsibility to protect users and uphold freedom of expression applies to other social media platforms, too.

The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera’s editorial stance.
