A series of inflammatory advertisements submitted to Facebook calling for violence and genocide against Palestinians were approved by the platform, the Intercept has revealed.
The ads brazenly violated Facebook’s policies, containing explicit calls for “wiping out Gazan women and children” and demanding a “Holocaust for the Palestinians”, yet they passed the platform’s machine-based moderation filters.
“The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people,” Nadim Nashif, founder of the Palestinian social media research and advocacy group 7amleh, which submitted the test ads, told the Intercept. “Throughout this crisis, we have seen a continued pattern of Meta’s clear bias and discrimination against Palestinians.”
Ads submitted in both Hebrew and Arabic included flagrant violations of the policies of Facebook and its parent company, Meta. Some contained violent content directly calling for the murder of Palestinian civilians. The idea to test Facebook’s automated content filtering system came about when Nashif saw an advertisement on Facebook directly calling for the murder of Palestinian rights activist Paul Larudee.
The sponsored post had passed through and been approved by Facebook’s machine learning tools that supposedly moderate harmful content. Though the ad was removed following a complaint, questions remain over how a post calling for assassination, a violation of Facebook’s own rules, was permitted on the platform.
According to the Intercept, ads calling for the murder of Larudee were sponsored by Ad Kan, an Israeli right-wing group started by former Israeli military and intelligence personnel. According to its website, Ad Kan aims to target “anti-Israeli organisations.”
Last year, an external audit found Facebook had no algorithms to detect violent Hebrew content against Arabs. Despite subsequent assurances of improvements, these new revelations indicate otherwise. The implication is that either Facebook’s much-touted AI tools for curbing hate speech do not work, or they are not being applied when pro-Israeli voices call for the murder and genocide of Palestinians.
“We knew from the example of what happened to the Rohingya in Myanmar that Meta has a track record of not doing enough to protect marginalised communities,” Nashif said, “and that their ads manager system was particularly vulnerable.”
Meanwhile, Facebook has aggressively censored Arabic content based on the merest suspicion of policy violations. The discrepancies between policing Arabic and Hebrew speech have raised troubling questions about Facebook’s impartiality and anti-Palestinian bias. Facebook spokesperson Erin McPike claimed that the ads had been approved accidentally. “Despite our ongoing investments, we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes,” she said. “That’s why ads can be reviewed multiple times, including once they go live.”
For Palestinians, this amounts to the latest evidence that the world’s dominant social platform selectively applies rules when it comes to protecting their lives and dignity. And in an environment charged with ethnic hatred, the real-world implications for such double standards can prove deadly, as seen in Myanmar where Facebook posts are said to have played a role in the genocide of Rohingya Muslims.