
IN THE PRESS

The Guardian: Meta's disparity in moderating Hebrew vs. Arabic content

An independent analysis commissioned by Meta in 2022 concluded that its moderation system "penalized Arabic more often than Hebrew," even accounting for the disparity in the number of speakers.


The Meta company logo on a smartphone, March 25, 2024. (Credit: Sebastien Bozon/AFP)

Meta — the company behind platforms like Facebook, Instagram and WhatsApp — is struggling to moderate content related to the war in Gaza, particularly content written in Hebrew, according to a report by The Guardian.

An anonymous employee has claimed that Meta's policies on hate speech related to Palestine are unfair, an assessment echoed by Palestinian rights advocates since the early days of the war in Gaza that began last October. The report, published Thursday, states that Meta has not applied the same processes to evaluate the accuracy of content moderation and hate speech in Hebrew as it has in Arabic.

“Hebrew, spoken by approximately 10 million people, makes up a much smaller fraction of posts on Meta’s social networks than Arabic, spoken by about 400 million people. Critics say due to the ongoing war, more attention is needed to Hebrew-language content, and Meta has faced questions around its moderation of posts in Hebrew before,” the newspaper states, arguing that "Meta’s review of hate speech policy sparks concern of further censorship of pro-Palestinian content."

The Guardian also notes that an independent analysis commissioned by Meta in 2022 concluded that its moderation system "penalized Arabic more often than Hebrew," even accounting for the disparity in the number of speakers, during the peak of tensions between Israel and the Palestinians in 2021. The analysis found that Meta’s systems flagged Arabic content at a “significantly higher” rate than Hebrew content, and suggested that Meta’s policies may have resulted in unintended bias through greater over-enforcement of Arabic content compared to Hebrew content, according to The Guardian.

Some employees directly involved in moderating content are also hesitant to voice their concerns for fear of retaliation, as detailed in a recent letter signed by more than 200 Meta workers. According to a former employee, these conditions suggest that the company's priorities are "not about actually making sure content is safe for the community."

“When Palestinian voices are silenced on Meta platforms, it has a very direct consequence on Palestinian lives,” Cat Knarr of the U.S. Campaign for Palestinian Rights told The Guardian. “People don’t hear about what’s happening in Palestine, but they do hear propaganda that dehumanizes Palestinians. The consequences are very dangerous and very real.”


Knarr, alongside a coalition of 48 civil society organizations and a number of prominent Palestinians, sent a letter to Meta accusing it of "aiding and abetting governments in genocide” through its content moderation policies.

Finally, The Guardian cites Nadim Nashif, founder and director of 7amleh — a non-profit organization that advocates for Palestinian and Arab human rights — who says that Meta's policy highlights the company's lack of a "nuanced or accurate understanding of the region."
