February 28, 2023

Press Release | Supreme Court: Social Media Platforms Can Moderate Dangerous Content But Choose Not To

Though CyberWell's direct reporting helped raise the removal rate of antisemitic content on social media platforms from 21% in May 2022 to just under 24% in December, that figure remains far too low.

In oral arguments for the Supreme Court cases Gonzalez v. Google and Twitter v. Taamneh, social media giants argued that controlling what their algorithms recommend would require wholesale suppression of even unobjectionable content – essentially, that it would be too difficult to remove truly objectionable content without casting too wide a net.

CyberWell, the world’s first live database of online antisemitism, demonstrates how inaccurate that claim is. With AI, human verification, and open-source intelligence technology, CyberWell collects and vets antisemitic content across social media, classifying it according to the community standards it violates and using the internationally accepted IHRA definition to identify precise antisemitic narratives.

Trained moderators with appropriate tools can enforce existing guidelines with precision – it’s just a matter of priority.

In May 2022, CyberWell found that platforms removed just 21% of verified antisemitic content. By year’s end, after sending alerts detailing the extent of the issue and demonstrating how such content could be identified and removed, the removal rate stood at less than 24%.

[Bar graph: removal rates of antisemitic content by platform, showing that on average 23.8% of antisemitic content was removed following CyberWell's reporting directly to the platforms.]

“When there is a legal or monetary penalty involved, social media companies have allocated appropriate resources to content moderation,” said CyberWell founder and extremism expert Tal-Or Cohen Montemayor, Adv. “Responding to companies who want to protect their intellectual property, advertisers who want to protect clean brands, and regulators who want to shut down illegal activity, they have been able to deploy extensive automated and human resources to control what their sites recommend. It’s time for them to do the same for antisemitism and other forms of hate.”

Twitter recently intervened in its own algorithm to boost the reach of Elon Musk’s tweets. Platforms can intervene to limit the reach of harmful content – they are choosing not to.

