BSR’s audit finds Facebook hurt Palestinians in the Israel-Gaza war.

An independent audit of Meta’s handling of online content during the two-week war between Israel and the Palestinian militant group Hamas last year found that the social media giant had denied Palestinian users their freedom of expression by wrongly removing their content and punishing Arabic-speaking users more heavily than Hebrew-speaking ones.

The report from the consulting firm Business for Social Responsibility is another indictment of the company’s ability to police its global public square and to balance freedom of expression against potential harm in a tense international environment. It also represents one of the first insider accounts of a social platform’s failures during the war. And it reinforces complaints from Palestinian activists that online censorship fell more heavily on them, as The Washington Post and other media outlets reported at the time.

“The BSR report confirms that Meta’s censorship violated the #Palestinian right to freedom of expression, among other human rights, through its greater over-enforcement of Arabic content compared with Hebrew content, which was largely under-moderated,” 7amleh, the Arab Center for the Advancement of Social Media, a group that advocates for Palestinian digital rights, said in a statement on Twitter.

The May 2021 war was initially sparked by a dispute over an impending Israeli Supreme Court case on whether settlers had the right to evict Palestinian families from their homes in a contested neighborhood of Jerusalem. During tense protests over the court case, Israeli police stormed the Al Aqsa mosque, one of Islam’s holiest sites. Hamas, which governs Gaza, responded by firing rockets at Israel, and Israel retaliated with an 11-day bombing campaign that killed more than 200 Palestinians. More than a dozen people in Israel were also killed before both sides agreed to a ceasefire.

During the war, Facebook and other social platforms were lauded for their pivotal role in sharing first-person, on-the-ground accounts of the fast-moving conflict. Palestinians posted photos of homes reduced to rubble and of children’s coffins during the barrage, sparking a global outcry to end the fighting.

But issues with content moderation surfaced almost immediately. At the start of the protests, Instagram, which Meta owns along with WhatsApp and Facebook, began restricting content containing the hashtag #AlAqsa. At first, the company blamed the problem on an automated software deployment error. After The Post published a story highlighting the issue, a Meta spokesperson added that “human error” had also contributed to the glitch, but offered no further information.

The BSR report sheds new light on the incident. It states that the hashtag #AlAqsa was mistakenly added to a list of terms associated with terrorism by an employee of a third-party contractor that handles content moderation for the company. The employee wrongly drew “from an updated list of US Treasury Department terms containing the Al Aqsa Brigade, resulting in #AlAqsa being hidden from search results,” the report found. The Al Aqsa Brigade is a well-known terrorist group. (BuzzFeed News reported at the time on internal discussions about the terrorism mislabeling.)


The report, which covers only the period around the 2021 war and its immediate aftermath, confirms years of accounts from Palestinian journalists and activists that Facebook and Instagram appear to censor their posts more often than those of Hebrew speakers. BSR found, for example, that after adjusting for the population difference between Hebrew and Arabic speakers in Israel and the Palestinian territories, Facebook was removing or adding strikes to more posts from Palestinians than from Israelis. Internal data reviewed by BSR also showed that the company’s software regularly flagged potentially rule-violating content in Arabic at higher rates than content in Hebrew.

The report found that this was likely because Meta’s AI-based hate-speech systems use lists of terms associated with foreign terrorist organizations, many of which are groups from the region. As a result, a person posting in Arabic would be more likely to have their content flagged as potentially associated with a terrorist group.

The report also said that Meta had built such detection software to proactively identify hate and hostile speech in Arabic, but had not done so for Hebrew.

The report further found that, because of a shortage of content moderators in both Arabic and Hebrew, the company was routing potentially violating content to reviewers who did not speak or understand the language, particularly Arabic dialects, resulting in additional errors.

The report, commissioned by Facebook at the recommendation of its independent Oversight Board, made 21 recommendations to the company. These include changing its policies on identifying dangerous organizations and individuals, giving users greater transparency when posts are penalized, reallocating Hebrew and Arabic content moderation resources based on “market composition,” and routing potential Arabic-language violations to reviewers who speak the same Arabic dialect as the one used in the post.

In a response, Miranda Sissons, Meta’s director of human rights, said the company would fully implement 10 of the recommendations and partly implement four. The company was “evaluating the viability” of six others and was taking no “further action” on one.

“There are no quick and overnight fixes for many of these recommendations, as BSR makes clear,” Sissons said. “While we have already made significant changes as a result of this exercise, this process will take time, including time to understand how some of these recommendations can best be addressed and whether they are technically feasible.”


In its statement, 7amleh, the Arab Center for the Advancement of Social Media, said the report wrongly characterized Meta’s bias as unintentional.

“We believe that the continued censorship of [Palestinian] voices for years, despite our reports and arguments about this bias, confirms that this is deliberate censorship unless Meta commits to ending it,” the group said.