February 28, 2023

Press Release | Supreme Court: Social Media Platforms Can Moderate Dangerous Content But Choose Not To

Though CyberWell's reporting helped raise the removal rate of antisemitic content on social media platforms from 21% in May 2022 to just under 24% in December 2022, that figure remains far too low.



In oral arguments for the Supreme Court cases Gonzalez v. Google and Twitter v. Taamneh, social media giants argued that controlling what their algorithms recommend would require wholesale suppression of even unobjectionable content – essentially, that it would be too difficult to remove truly objectionable content without casting too wide a net.

CyberWell, the world’s first live database of online antisemitism, shows how inaccurate that claim is. Using AI, human verification, and open-source intelligence technology, CyberWell collects and vets antisemitic content across social media, classifying it according to the community standards it violates and using the internationally accepted IHRA definition to identify precise antisemitic narratives.

Trained moderators with appropriate tools can enforce existing guidelines with precision – it’s just a matter of priority.

In May 2022, CyberWell found that platforms removed just 21% of verified antisemitic content. By year’s end, after sending alerts detailing the extent of the issue and demonstrating how such content could be identified and removed, the removal rate stood at less than 24%.

Bar graph showing the rate of removal for antisemitic content on each social media platform: on average, 23.8% of antisemitic content was removed following CyberWell's reporting directly to platforms.

“When there is a legal or monetary penalty involved, social media companies have allocated appropriate resources to content moderation,” said CyberWell founder and extremism expert Tal-Or Cohen Montemayor, Adv. “Responding to companies who want to protect their intellectual property, advertisers who want to protect clean brands, and regulators who want to shut down illegal activity, they have been able to deploy extensive automated and human resources to control what their sites recommend. It’s time for them to do the same for antisemitism and other forms of hate.”

Twitter recently intervened in its own algorithm to boost the reach of Elon Musk’s tweets. The platforms can intervene to limit the reach of harmful content – they are choosing not to.
