In the midst of escalating conflict in the Middle East, X is failing to moderate hate speech that promotes antisemitic conspiracies, praises Hitler and dehumanizes Muslims and Palestinians.
In new research, the Center for Countering Digital Hate (CCDH), a nonprofit that researches online hate and extremism, collected a sample of 200 X posts across 101 accounts that featured hate speech. Each post either “directly addressed the ongoing conflict, or appeared to be informed by it,” and each was reported on October 31 using X’s reporting tools.
The reporting tool invites users to flag content and specify what category of behavior it falls into, including an option for hate speech. That option covers “Slurs, Racist or sexist stereotypes, Dehumanization, Incitement of fear or discrimination, Hateful references, Hateful symbols & logos.”
According to the CCDH, 196 of the 200 posts remain online, while one account was suspended after being reported and two were “locked.” A sample of the posts reviewed by TechCrunch shows that X continued to host content that depicted antisemitic caricatures, called Palestinians “animals” and invited others to “enjoy the show of jews and muslims killing each other.”
All example X posts reviewed by TechCrunch remained online at the time of writing. Of the 101 accounts represented across the sample posts, 82 were paid verified accounts with a blue check.
View counts on the X posts varied, but some were viewed over 100,000 times, including posts denying the Holocaust and an interactive GIF depicting a man in a yarmulke being choked, which was viewed nearly one million times. In total, the posts that were not removed collected more than 24 million views.
While a sample of 200 posts represents only a fraction of the content on X at any given time, many of the posts are notable for their flagrant racism, their open embrace of violence and the fact that they remain online even now. Social media companies regularly fail to remove swaths of content that violate their rules, but they generally remove those posts very quickly when researchers or journalists highlight them.
Of the sample posts included in the CCDH report, some are now affixed with a label that says “Visibility limited: this Post may violate X’s rules against Hateful Conduct.” Other content, including posts promoting antisemitic conspiracies, jokingly dismissing the Holocaust and using dehumanizing language to normalize violence against Muslims, remains online without a label. TechCrunch reached out to X about why the company took no action against the majority of the posts, which were reported two weeks ago, but received the automated reply “Busy now, please check back later.”
“X has sought to reassure advertisers and the public that they have a handle on hate speech – but our research indicates that these are nothing but empty words,” Center for Countering Digital Hate CEO Imran Ahmed said. “Our ‘mystery shopper’ test of X’s content moderation systems – to see whether they have the capacity or will to take down 200 instances of clear, unambiguous hate speech – reveals that hate actors appear to have free rein to post viciously antisemitic and hateful rhetoric on Elon Musk’s platform.”
In its safety guidelines, X states that users “may not attack other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” Under Elon Musk’s leadership, the company formerly known as Twitter has reduced its content moderation workforce, rolled back safety policies protecting marginalized groups and invited waves of previously banned users back to the platform.
This year, X filed a lawsuit against the CCDH, alleging that the nonprofit used data on the platform without authorization and intentionally undermined the company’s advertising business. The CCDH maintains that X is wielding legal threats to silence its research, which has factored heavily into a number of reports on X’s lax content moderation under Elon Musk.
The same day that the CCDH released its new report, X published a blog post touting its content moderation systems during the ongoing conflict in Israel and Gaza. The company says it has taken action on more than 325,000 pieces of content that violate its Terms of Service; those actions can include “restricting the reach of a post, removing the post or account suspension.”
“In times of uncertainty such as the Israel-Hamas conflict, our responsibility to protect the public conversation is magnified,” X’s Safety team wrote.