CyberWell is the first data platform dedicated to online antisemitic content, collected, vetted and curated with the goal of driving enforcement and improving community standards and hate speech policy across the digital space. CyberWell was created to leverage data to hold social media platforms accountable for the antisemitism they host and to fight for a safer digital future for all. By showcasing antisemitic content on an open platform, we are democratizing the raw data of online antisemitism so that non-profits, digital rights researchers, lawmakers, educators and journalists can propel their own activism, policy-making and research forward.
Recently the Adopt IHRA Coalition – a group of more than 180 organizations that previously penned an open letter to Facebook urging the adoption of the International Holocaust Remembrance Alliance’s (IHRA) working definition of antisemitism as a cornerstone of its community standards – used CyberWell’s database to highlight the state of antisemitism on Twitter and similarly call for the platform’s full adoption of the working definition as part of its hate speech policy. CyberWell uses the widely supported working definition to categorize offending content into specific types of antisemitism, and we too support the call for Twitter to adopt the definition as part of its hate speech policy.
But what does that mean in practice? How do you translate the IHRA working definition into hate speech compliance on a major social media platform? As an addendum to the Adopt IHRA Coalition’s efforts, we put together this brief methodology and data insights report to shed more light on what CyberWell’s initial monitoring of Twitter revealed about the state of antisemitism on the platform and the overarching hate speech policy gaps.
As a regular part of our working methodology, CyberWell uses a combination of broad search keywords (composed of general terms, slang, and code words for Jew-hatred), relevant images, and videos to identify a pool of content that is likely to include antisemitic material. We then apply our own exclusive specialized dictionary of antisemitic terms and expressions, based on the IHRA working definition, to flag content that is highly likely to be antisemitic. Each piece of content is reviewed by at least two professional analysts who are trained in the fields of antisemitism, linguistics and digital policy. All confirmed antisemitic content is reported to the relevant social media platform, in this case Twitter, and monitored to track how long it takes the platform to remove the content.
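As a rough illustration of the dictionary-flagging step described above, the sketch below matches candidate Tweets against a term list. The terms, function names, and tokenization here are hypothetical placeholders, not CyberWell’s actual IHRA-based lexicon or tooling, which are proprietary.

```python
import re

# Hypothetical placeholder terms; CyberWell's real specialized dictionary
# of antisemitic terms and expressions is not published.
FLAG_TERMS = {"example_slur", "example_trope", "example_code_word"}

def flag_candidates(tweets: list[str]) -> list[str]:
    """Return the subset of tweets containing at least one dictionary term.

    Flagged tweets are only candidates: in CyberWell's methodology each one
    still goes to at least two trained analysts for human review before
    being confirmed as antisemitic.
    """
    flagged = []
    for text in tweets:
        # Simple lowercase word tokenization for illustration.
        tokens = set(re.findall(r"[a-z_']+", text.lower()))
        if tokens & FLAG_TERMS:
            flagged.append(text)
    return flagged
```

For example, `flag_candidates(["an example_trope here", "a benign tweet"])` would return only the first Tweet for analyst review.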
The specific dataset cited in the call for Twitter to adopt the IHRA working definition examined antisemitic content published on Twitter from January 2020 through September 2022 in English and Arabic. For the relevant test period, the specialized dictionary flagged nearly 40,000 Tweets that had a high likelihood of being consistent with one of the eleven criteria of Jew-hatred laid out in the working definition. CyberWell’s analysts reviewed a sample of 2,810 Tweets over the test period and confirmed 1,079 antisemitic Tweets. Flagged antisemitic Tweets were also reviewed to determine whether they complied with Twitter’s Rules & Policies, and if not, which specific policy section was violated. This innovative process is unique to CyberWell, combining antisemitism monitoring with policy-compliance review to bridge the gap between stated digital policy and platform enforcement by generating data on specific policy failures.
CyberWell’s technology further analyzed the confirmed antisemitic Tweets, documenting the geo-location of each Tweet,1 identifying the repeating themes that violate general digital hate speech policy best practice (“Policy Violation Themes”),2 and assigning each vetted Tweet an engagement score and a potential reach score. The engagement score is an estimated interaction grade given to each antisemitic Tweet, based on retweets, likes and comments. Each antisemitic Tweet also received a potential reach score, calculated from the number of followers of the publishing account.
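The two scores described above can be sketched as follows. The report does not publish CyberWell’s actual formulas, so the weighting below (retweets counted double, since they propagate content furthest) is an illustrative assumption, as are the field and function names.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    retweets: int
    likes: int
    comments: int
    author_followers: int

def engagement_score(t: Tweet) -> int:
    # Assumed weighting for illustration only: retweets spread content
    # beyond the author's audience, so they weigh more than likes/comments.
    return 2 * t.retweets + t.likes + t.comments

def potential_reach(t: Tweet) -> int:
    # Per the report, potential reach is based on the publishing
    # account's follower count.
    return t.author_followers

tweet = Tweet(retweets=120, likes=340, comments=45, author_followers=10_000)
print(engagement_score(tweet))  # 625
print(potential_reach(tweet))   # 10000
```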
As of the writing of this report, since CyberWell began monitoring for online antisemitism in May 2022, a total of 1,898 antisemitic Tweets have been collected, vetted and reported to Twitter. While this sample may be small, it reflects our capacity as a start-up non-profit supported by philanthropic donations; CyberWell’s technology and bundle of open-source intelligence tools, coupled with our IHRA-based specialized lexicon, have the capacity to detect tens of thousands of pieces of antisemitic content. The full Twitter dataset from the relevant testing period is available here.
Platform: Twitter
Test period: January 2020 – September 2022

Engagement Score and Potential Reach

Top Violated Twitter Rule: Hateful Conduct

Twitter’s Hateful Conduct policy is consistent with a larger theme that is recognized and prohibited by most social media platforms’ hate speech policies: Dehumanizing & Stereotypical Hate Content, i.e. content that attacks or dehumanizes individuals or groups based on their protected characteristics. Prohibited content in this category includes speech or imagery that generalizes about a group or reduces it to unqualified behavioral statements and/or stereotypes (i.e., tropes). Digital platforms have policies against this content because it can promote a hateful ideology, inspire hatred or fear of the targeted group, and even incite violent acts against that group or its members. Some platforms, including Twitter, address Holocaust denial through this community standard.
Below are the top forms of Jew-hatred in the dataset, based on the working definition. It is worth noting that an antisemitic Tweet can be consistent with one or more forms of antisemitism as described in the working definition.

Criterion of the IHRA working definition: Type 2
“Making mendacious, dehumanizing, demonizing, or stereotypical allegations about Jews as such or the power of Jews as collective — such as, especially but not exclusively, the myth about a world Jewish conspiracy or of Jews controlling the media, economy, government or other societal institutions.”

Criterion of the IHRA working definition: Type 3
“Accusing Jews as a people of being responsible for real or imagined wrongdoing committed by a single Jewish person or group, or even for acts committed by non-Jews.”

Criterion of the IHRA working definition: Type 9
“Using the symbols and images associated with classic antisemitism (e.g., claims of Jews killing Jesus or blood libel) to characterize Israel or Israelis.”







© 2025 CyberWell. All Rights Reserved.