February 28, 2023

Press Release | Supreme Court: Social Media Platforms Can Moderate Dangerous Content But Choose Not To

Though CyberWell helped raise the removal rate of antisemitic content on social media platforms from 21% in May 2022 to just under 24% in December 2022, that figure is still far too low.



In oral arguments for the Supreme Court cases Gonzalez v. Google and Twitter v. Taamneh, the social media giants argued that controlling what their algorithms recommend would require wholesale suppression of even unobjectionable content – essentially, that it would be too difficult to remove truly objectionable content without casting too wide a net.

CyberWell, the world’s first live database of online antisemitism, shows how inaccurate that is. With AI, human verification, and open-source intelligence technology, CyberWell collects and vets antisemitic content across social media, classifying it according to the community standards it violates and using the internationally accepted IHRA definition to identify precise antisemitic narratives.

Trained moderators with appropriate tools can enforce existing guidelines with precision – it’s just a matter of priority.

In May 2022, CyberWell found that platforms removed just 21% of verified antisemitic content. By year's end, after CyberWell sent alerts detailing the extent of the issue and demonstrating how such content could be identified and removed, the removal rate stood at just under 24%.

[Figure: Bar graph of removal rates for antisemitic content by platform, showing that on average 23.8% of antisemitic content was removed following CyberWell's reporting directly to the platforms.]

“When there is a legal or monetary penalty involved, social media companies have allocated appropriate resources to content moderation,” said CyberWell founder and extremism expert Tal-Or Cohen Montemayor, Adv. “Responding to companies who want to protect their intellectual property, advertisers who want to protect clean brands, and regulators who want to shut down illegal activity, they have been able to deploy extensive automated and human resources to control what their sites recommend. It’s time for them to do the same for antisemitism and other forms of hate.”

Twitter recently intervened in its own algorithm to boost the reach of Elon Musk's tweets. Platforms can intervene to limit the reach of harmful content – they are choosing not to.

