Denial and Conspiratorial Self-Victimization in Antisemitic Discourse: Analysis of the Online Aftermath of Violent Attacks on Jews and Israelis

CyberWell is tracking a dangerous rise in online narratives that deny attacks on Jews and Israelis or claim they were “staged,” with our dataset amassing 14 million views and an average removal rate of only 25% across major social media platforms.

Executive Summary

  • Between November 7, 2024, and August 28, 2025, CyberWell conducted a cross-platform analysis of online social media content featuring antisemitic denial and conspiratorial self-victimization following violent attacks against Jews and Israelis around the world. The analysis focused on content that erased Jewish victimhood, denied atrocities, or falsely claimed that Jews or Israelis staged attacks against themselves. This worrying phenomenon, rooted in Holocaust hate speech, has resurged since October 7 and repeated itself after each of the 10 subsequent violent events examined in the dataset. These online narratives are especially dangerous because they perpetuate a cycle of violence – both fetishizing the harm inflicted by the events and calling for further violence.
  • These trends in online antisemitism have emerged over the last two years of record-breaking antisemitism and are therefore not covered by the IHRA (International Holocaust Remembrance Alliance) Working Definition’s 11 examples of antisemitism. The IHRA examples address denial and conspiratorial self-victimization only in the context of the Holocaust and Holocaust hate speech and have not been updated since the definition’s adoption in 2016. To address this gap, CyberWell mapped four additional antisemitic narrative categories of present-day antisemitism: (1) Denial of Violent Events Against Jews, (2) Denial of Violent Events Against Israelis, (3) Conspiratorial Self-Victimization Against Jews, and (4) Conspiratorial Self-Victimization Against Israelis. Of these, the third category – claims that Jews staged or carried out violence against themselves – is the most prominent in the dataset, appearing in 88% of posts.
  • This report analyzes 308 verified pieces of antisemitic content collected from Meta (Facebook and Instagram), TikTok, X, and YouTube, each containing denial- and/or conspiratorial self-victimization-related antisemitism. CyberWell analyzed and verified this content as antisemitic according to the IHRA definition, as well as CyberWell’s four additional categories addressing denial and conspiratorial self-victimization.
  • Together, these posts amassed nearly 14 million views and generated more than 500,000 user interactions. This makes the examined dataset the second most-viewed of CyberWell’s reports, following its earlier study on denial and conspiratorial self-victimization related to the October 7 massacre, which examined online discourse from the first three months following Hamas’ mass terror attack against Israeli civilians. These findings underscore that denial and conspiratorial self-victimization remain among the most prominent forms of antisemitism online.
  • Although U.S.-linked incidents made up just 35.4% of the dataset, they accounted for 80.3% of the total views and 56% of the engagement, reflecting the disproportionate traction antisemitic content gains around U.S. events. This pattern aligns with CyberWell’s assessment that antisemitic influencers with large audiences are more likely to deny the occurrences of antisemitic attacks in the U.S. or claim that Jews orchestrated attacks against themselves in a U.S. context.
  • X alone accounted for most of the dataset at over 50%, and showed the highest engagement, with posts on X representing 85.5% of total views and 64.3% of total interactions. This highlights X’s disproportionate role in the spread of these narratives across social media.
  • Enforcement across platforms against these new antisemitic narratives remained limited. Overall, only 25.4% of policy-violating posts were removed. Platform-specific removal rates were similarly low: TikTok 34.9%, X 25%, YouTube 20%, Meta 21.6% – all significantly below the average 50% removal rate of escalated antisemitism documented in CyberWell’s 2024 Annual Report.
  • 47.4% of all posts centered on the October 7 massacre, which remains the most frequent reference point in denial and conspiratorial self-victimization narratives nearly two years after the attack. Enforcement was especially weak, with an overall removal rate of just 17.8%, and 25.7% for denial specifically – both notably lower than the 36.4% removal rate documented in previous monitoring periods. This persistence mirrors the patterns long observed in Holocaust denial. The recurrence of these narratives in both contexts underscores that there should be no distinction between denying the Holocaust and denying other acts of antisemitic violence for the purposes of trust and safety and enforcement of digital policy. Despite stronger action on Holocaust-related hate speech, October 7 denial and conspiratorial self-victimization content remain severely under-enforced.
  • Despite formal policies against violent-event denial, enforcement by platforms tends to address only outright event denial. More subtle forms – such as denial of specific elements (e.g., sexual assaults on October 7), narrow “major event” thresholds, or “questioning” tactics framed as skepticism – largely evade enforcement action from the platforms. Moreover, aside from TikTok’s policy explicitly prohibiting it, conspiratorial self-victimization remains unaddressed by other mainstream social media platforms outside the context of the Holocaust. As a result, removal rates remain low, and these narratives persist despite targeting Jews and Israelis as protected groups.
  • The report concludes with targeted recommendations for improving platform policies. These include explicitly prohibiting all forms of denial of well-documented violent events and conspiratorial self-victimization related to antisemitic attacks; closing the loopholes that allow “questioning” or minimizing such events; ensuring that all well-documented incidents are covered rather than only “mass-casualty” events; and developing stronger automated detection tools to identify and flag such content.

Introduction

Antisemitism is fueling a rise in violent attacks against Jews and Israelis across continents and communities. These attacks range from shootings targeting Jewish victims, to the vandalism of synagogues, to physical assaults against Jewish and Israeli individuals. While some openly condemn these acts, others take these violent incidents as an opportunity to spread hatred further—particularly online, where some users not only actively justify the attacks against Jewish communities around the world, but even call for additional violence and more victims. A more recent and troubling trend in online antisemitic discourse denies that such attacks even occurred or conspiratorially accuses the Jewish community of staging these acts against themselves.

Denial manifests as the outright rejection of an antisemitic event, its dismissal as a hoax, or the minimization of its severity. Conspiratorial self-victimization extends this dynamic further, advancing claims that Jews or Israelis staged attacks to elicit sympathy, achieve political or financial gain, or manipulate public opinion – and, in some cases, alleging that they are even willing to “sacrifice” or kill members of their own community to do so. In some cases, even when a perpetrator is clearly identified, claims circulate that the assailant was Jewish. This absolves others of responsibility and redirects blame to vilify the victim. In the aftermath of antisemitic attacks, these narratives spread quickly and deepen the cycle of hatred.

From November 7, 2024 — the eve of the Amsterdam pogrom against Israeli soccer fans — to August 28, 2025, CyberWell monitored online discourse following a series of major antisemitic attacks (outlined comprehensively in this report beginning on page 19). These included the October 7, 2023, massacre in Israel, which is still the target of denial and conspiratorial self-victimization narratives to this day. Monitoring also covered other violent incidents, such as the shooting of two Israeli Embassy staffers in Washington D.C. on May 21, 2025, and a series of arson attacks targeting synagogues in Australia in December 2024 and July 2025. Across these and other incidents, such narratives proliferated rapidly online, gaining significant visibility and engagement.

These narratives are not new creations of the digital age but are rooted in longstanding antisemitic behaviors. Classic scapegoating cast Jews as responsible for external crises. In antiquity, this appeared in the accusation of deicide, claiming “the Jews killed Jesus”. In the Middle Ages, it reemerged in false charges of well-poisoning during the Black Death. These accusations portrayed Jews as the hidden cause of society’s greatest tragedies, a theme that persists in conspiracy theories today. Over time, scapegoating evolved into an even more insidious form: blaming Jews not only for global disasters, but even for their own tragedies.

Post-war Holocaust revisionism introduced this new dimension. In the decades following the genocide of the Jewish population of Europe, networks of far-right activists, neo-Nazi groups, and self-styled “revisionist” authors advanced narratives that distorted the genocide by suggesting that the Jews themselves bore responsibility for the Holocaust or that their suffering had been exaggerated for political gain. These forms of distortion – including the minimization of victim numbers, the shifting of perpetrator guilt, and the inversion of responsibility – served to weaken the moral clarity and historical specificity of the Holocaust.[1]

At the same time, the post-war era also witnessed the rise of Holocaust denial, which asserted that the genocide itself did not occur. Crucially, as scholars have shown, the primary function of Holocaust denial was not to justify contemporary attacks against Jews but to absolve perpetrators and ideological sympathizers of blame, preserve national self-image, and safeguard pre-existing antisemitic worldviews.[2]

This contrasts sharply with today’s environment, where denial and conspiratorial self-victimization are increasingly deployed not only to erase past crimes but also to rationalize or even encourage new antisemitic violence. These narratives are now applied in real time to violent attacks against Jews and Israelis. The October 7 massacre has become a striking example: even in the face of overwhelming evidence – including video recordings and livestreams by the terror group Hamas, as well as multiple investigations by human rights groups and governments – antisemitic discourse has denied that the attack occurred, minimized its brutality, or shifted blame onto Jews and Israelis themselves. These contemporary patterns echo older forms of antisemitism, but with a dangerous new orientation – denial and conspiratorial self-victimization are now being weaponized to justify ongoing hostility rather than merely to reinterpret history.

In this way, these narratives operate as a recurring cycle: Antisemitism is spread online; violence against Jews is circulated as entertainment; the attacks are subsequently denied or blamed on the victims; and finally, calls for further violence emerge. This cycle makes denial and conspiratorial self-victimization among the most dangerous contemporary antisemitic strategies.

This report examines how these newly emergent narratives of denial and conspiratorial self-victimization of violent events function within current antisemitic discourse. It first explains why this topic – which gained prominence after the October 7 massacre – has received little academic attention to date, and how these narratives fall outside the eleven examples of the IHRA Working Definition of Antisemitism. To address this gap, CyberWell introduces four additional categories that capture these narratives, recognizing them as dangerous drivers in a cycle that fetishizes the precipitating event and incites further violence.

The report then details the dataset, key insights, and cross-platform findings, highlighting significantly low removal rates for denial and conspiratorial self-victimization and showing how these outcomes track with persistent policy and enforcement gaps on major platforms.

CyberWell’s Mission

CyberWell is a non-profit organization dedicated to eradicating online antisemitism by driving the improvement and enforcement of community standards across social media platforms. Using data-driven analysis, CyberWell identifies where such policies are inconsistently applied or fail to protect Jewish users from harassment and hate. CyberWell’s unique methodology consists of identifying antisemitic keywords, applying a specialized dictionary grounded in the International Holocaust Remembrance Alliance’s (IHRA) definition of antisemitism, expanded to include emergent antisemitic tropes not explicitly covered in the working definition, alongside systematic human review. Each item of content is evaluated by trained analysts with expertise in antisemitism, linguistics, and digital policy to determine both its alignment with antisemitic frameworks and its potential violation of platform rules. Further details on the methodology are available in our policy guidelines.

CyberWell currently monitors Facebook, Instagram, X (formerly Twitter), TikTok, and YouTube in both English and Arabic. We serve as trusted flaggers for Meta (Facebook, Instagram, & Threads), TikTok, and YouTube, enabling us to escalate policy-violating content and advise content moderation teams directly. In May 2022, as part of our strategy to democratize data, CyberWell compiled the first-ever open data platform of online antisemitic content.

Academic Perspectives: Gaps in Research on Atrocity Denial and Conspiratorial Self-Victimization

Researchers conducting academic studies have begun to examine the denial of specific atrocities, most notably October 7. However, they often overlook a broader pattern: denial and conspiratorial self-victimization recur across violent events targeting Jews and Israelis. This pattern is evident in this report’s verified dataset of online social media content. Nevertheless, the existing research sheds light on the mechanisms through which denial erodes truth and fosters hostility. The Jerusalem Institute of Justice’s (JIJ) Echoes of Denial report[3] frames the denial of atrocity crimes, including the October 7 massacre, as “a severe threat to collective memory and societal stability. It undermines the truth, fuels hatred [emphasis added], and impedes the healing process for victims and survivors”.[4] In this context, fueling hatred can either elicit hate speech online or, worse, lead to real-world harm against Jews and Israelis.

According to the report, denial of the October 7 atrocities contributed to a marked rise in antisemitism. Citing data from the Anti-Defamation League (ADL), the paper notes that “9,354 antisemitic incidents were recorded in 2024, compared to 8,873 in 2023, 3,697 in 2022, and 2,717 in 2021”.[5] These figures underscore how denial of violent events – particularly October 7 – can directly fuel real-world harm.

Bar-Halpern and Wolfman (2025) add a crucial psychological perspective, defining denial as a form of “traumatic invalidation”, which includes “denying atrocities against the Jewish people (e.g., denying/inverting the Holocaust, denying 10/7 or sexual violence against Jews)”.[6] Their work demonstrates that dismissing or inverting such violent events amplifies harm by erasing Jewish suffering from public recognition and portrays Jews and Israelis as “fabricators”. While Bar-Halpern and Wolfman focus on the psychological dimension of denial, their framework also illuminates how these narratives may foster an environment in which antisemitism spreads unchecked and calls for violence are normalized.

The Cycle of Violence

CyberWell’s research highlights that this recurrent pairing of denial and conspiratorial self-victimization does more than distort reality — it fuels real-world harm. Since October 7, these tropes have created echo chambers online that normalize antisemitism and dehumanization of Jews and Israelis, inciting further violence. This cycle of blaming Jews for crises while denying their victimhood when attacked is one of the more dangerous narrative trends observed in the current antisemitic discourse. Yet, platform policies rarely address it directly. As a result, antisemitic denial and conspiratorial self-victimization narratives remain overlooked, leaving a critical gap that allows this content to spread unchecked.

CyberWell’s analysis indicates that denial and conspiratorial self-victimization form part of a cycle that links online rhetoric with real-world danger:

  1. Mobilize Online: Antisemitic actors use social media and messaging platforms to mobilize supporters, coordinate actions, and amplify incitement, often leveraging engagement-driven algorithms.
  2. Fetishize Death and Harm of Jews: Violence against Jews is sensationalized and fetishized, often through recordings of incidents of harassment, harm, and targeted violence, which are then streamed or uploaded to social media – turning real suffering into entertainment.
  3. Deny the Violence: After attacks, claims circulate that the violence is fake, staged, or provoked by victims, re-traumatizing survivors and members of the Jewish community impacted by the targeted antisemitic violence.
  4. Calls for More Violence: Denial often leads to calls for additional attacks, escalating hatred and increasing the risk of further violence.

 

 

In every violent incident examined in this dataset, at least the last three components occurred. The Amsterdam pogrom – a coordinated antisemitic attack on Israeli soccer fans in Amsterdam on November 7, 2024 – exemplified all four: It was organized and mobilized online, the violence was glorified, denial and conspiratorial self-victimization followed, and calls for further violence emerged in the lead-up to the Israel national team’s match in Paris a week later.

 

Mobilize Online:

Anti-Israel demonstrations against the friendly match between the Israeli Maccabi Tel Aviv Football Club and the Dutch football club AFC Ajax were organized online. The municipality of Amsterdam canceled the planned demonstration to avoid clashes and tensions at the stadium, after which several of the organizing groups moved it to a different location.

Explanation: Post announcing the alternative location for the demonstration against the Israeli soccer team on the eve of the match and subsequent pogrom, shared by the following Instagram accounts: nojustice_nopeace.nl, free.palestina.nl, week.4palestinanl, week.4palestinenl, br4palestine, p.g.n.l, utrecht4palestine

 

However, during the days leading up to the match – while Israeli fans stayed in Amsterdam – digital evidence indicates that the attacks were preceded by planned coordination, with multiple reports suggesting that much of the organization occurred on Telegram or WhatsApp, as seen in the images below:

From The Times of Israel

 

 

From The Times of Israel

 

Fetishize Death and Harm of Jews & Israelis:

Following the attacks, users engaged in online glorification of violence against Israeli victims. In the tweet below, the user refers to the victims as “filthy Jews” and praises those who carried out the attacks.

 

Deny the Violence/Engage in Conspiratorial Self-Victimization:

In this TikTok post, the speaker disputes the characterization of the November 7, 2024, antisemitic mob attack in Amsterdam as a pogrom. The speaker states: "[...] Only a couple days after the ADL said that I was antisemitic for not calling the football hooligan fight a, you know, a pogrom [...] Israeli mob violence in Amsterdam [...] false narrative insistently advanced by its own newspaper that the Israeli fans were victims of mob violence motivated by anti-Jewish hatred [...]" [00:15-02:43]. Alongside the video, the user adds the following description: “The so-called Amsterdam pogrom wasn’t actually a pogrom after all […]”.

 

Calls for More Violence:

Seven days after the attacks, a soccer match was held in Paris between the French and Israeli national teams. In the lead-up to this event, CyberWell found that multiple users were already calling for further violence. Below is one example:

 

The Amsterdam pogrom thus exemplifies how antisemitic narratives evolve and reinforce one another across online spaces. Each stage of the cycle—mobilization, glorification, denial, and renewed incitement—builds upon the previous, sustaining a continuous feedback loop of hate. This incident underscores both the real-world impact of online antisemitism and the speed with which violent rhetoric can translate into physical harm and inspire further attacks.

Methodology: CyberWell’s Categorization of Denial and Conspiratorial Self-Victimization

CyberWell’s methodology is grounded in the IHRA Working Definition of Antisemitism. However, beyond IHRA’s specific focus on Holocaust-related denial, the IHRA definition does not reference broader denial or conspiratorial self-victimization narratives surrounding antisemitic violent attacks. In this context, only the fourth and fifth examples of the IHRA working definition (hereinafter, “IHRA Example 4”, “IHRA Example 5”) are directly relevant, as they focus solely on Holocaust denial and distortion:

  • IHRA Example 4: “Denying the fact, scope, mechanisms (e.g. gas chambers) or intentionality of the genocide of the Jewish people at the hands of National Socialist Germany and its supporters and accomplices during World War II (the Holocaust).”
  • IHRA Example 5: “Accusing the Jews as a people, or Israel as a state, of inventing[7] or exaggerating the Holocaust.”

Conspiratorial self-victimization, which is currently only addressed in the context of the Holocaust, remains outside of existing definitional frameworks. While the IHRA definition includes examples of scapegoating, these do not capture the specific dynamics of conspiratorial self-victimization.

The concept of scapegoating Jews is covered in the third example of the IHRA working definition (hereinafter, “IHRA Example 3”):

  • IHRA Example 3: “Accusing Jews as a people of being responsible for real or imagined wrongdoing committed by a single Jewish person or group, or even for acts committed by non-Jews.”

This pertains to collective blame for real or imagined wrongdoing, often in connection with major world events, disasters, and tragedies, rather than instances in which Jews are victims of a tragedy and are subsequently accused of fabricating it or causing it. In other words, IHRA Example 3 covers scapegoating, but not conspiratorial self-victimization.[8]

To address this gap, CyberWell developed a categorization system designed to specifically track and analyze these narratives. The following categories are labeled CyberWell 1-4 (hereinafter “CW1-4”) to distinguish them as part of CyberWell’s framework:

  • CW1 – Denial of violent events against Jews: Outright rejection or minimization of antisemitic violence, including the denial of scope and sexual assault as part of an attack against Jews.
  • CW2 – Denial of violent events against Israelis: Outright rejection or minimization of antisemitic violence, including the denial of scope and sexual assault as part of an attack against Israelis.
  • CW3 – Conspiratorial self-victimization against Jews: Accusing Jews of orchestrating or committing a violent attack against themselves.
  • CW4 – Conspiratorial self-victimization against Israelis: Accusing Israelis of orchestrating or committing a violent attack against themselves.

Notes:

  1. Many examples fall into more than one category simultaneously. For example, content labeling the October 7 massacre as a “hoax” (CW1) often appears alongside claims that Jews staged the attack for political gain (CW3).[9]
  2. These categories can also intersect with antisemitic narratives under the IHRA Working Definition of Antisemitism, where denial and conspiratorial self-victimization overlap with classic antisemitic tropes. Such narratives may incorporate broader conspiracy theories, reinforce antisemitic tropes, or rely on harmful generalizations. For instance, a user might blame Jews for a shooting against the Jewish community while concurrently claiming that all Jews are “Satanic”.

The specific overlaps and examples will be explored in detail throughout this report. To examine these patterns, CyberWell carried out a focused analysis, presented in the following section.
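As a hypothetical illustration (not CyberWell’s actual data schema — the field names and the example post are invented), the multi-label CW coding described above could be sketched as:

```python
# Hypothetical sketch of multi-label annotation using the CW1-4 categories.
# Field names and the example post are illustrative assumptions only.
from dataclasses import dataclass, field

CATEGORIES = {
    "CW1": "Denial of violent events against Jews",
    "CW2": "Denial of violent events against Israelis",
    "CW3": "Conspiratorial self-victimization against Jews",
    "CW4": "Conspiratorial self-victimization against Israelis",
}

@dataclass
class AnnotatedPost:
    platform: str
    cw_labels: set = field(default_factory=set)      # CW codes; may overlap
    ihra_examples: set = field(default_factory=set)  # e.g. {3, 4, 5}

# A post calling October 7 a "hoax" (CW1) while also claiming the attack
# was staged by the victims (CW3) receives both labels:
post = AnnotatedPost(platform="X", cw_labels={"CW1", "CW3"}, ihra_examples={3})
assert post.cw_labels.issubset(CATEGORIES)
```

Because the categories are not mutually exclusive, a set of labels (rather than a single label per post) keeps the overlap described in Note 1 explicit.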

Data Collection & Scope

From November 7, 2024, to August 28, 2025, CyberWell examined antisemitic content that denied violent attacks on Jews and Israelis or propagated conspiratorial self-victimization claims asserting that the victims orchestrated the attacks themselves. This start date was selected with intention. The Amsterdam pogrom that occurred that day marked a critical turning point as the first major act of antisemitic violence following October 7. Denial and conspiratorial self-victimization narratives once again dominated the discourse—a pattern consistently observed after each subsequent violent event. This moment demonstrated that October 7 was not an isolated case but the onset of a recurring trend. It also illustrated how such narratives can fuel further incitement, as evidenced by calls for violence at the Paris match the following week.

A total of 308 posts were verified as antisemitic and centered on these narratives, across five major platforms: Facebook, Instagram, TikTok, X, and YouTube. The content, in both English and Arabic, was collected using CyberWell’s methodology, which includes AI-powered detection, keyword-based tracking, and manual review by trained analysts. Each post in the dataset was evaluated using CyberWell’s categorization of denial and conspiratorial self-victimization (CW) as well as the IHRA Working Definition of Antisemitism.

List of Violent Events

Across all collected posts, the discourse consistently revolved around the following violent attacks targeting Jews and/or Israelis:

Note: While some of these attacks occurred prior to November 7, 2024 — such as the 1994 London Israeli Embassy bombing — all posts included in this dataset were collected beginning on that date. This time frame was selected to analyze how denial and conspiratorial self-victimization narratives emerged and spread online after November 7, 2024, irrespective of when the original violent event took place.

  • A car bomb targeted the Israeli Embassy in London, injuring twenty civilians, followed by a second bomb outside Balfour House, a Jewish community building.
  • Hamas launched a large-scale terrorist attack in southern Israel, killing over a thousand people, mostly civilians.
  • A violent mob targeted Israeli soccer fans in Amsterdam, leading to physical assaults, multiple injuries, hospitalizations, and a rescue flight.
  • The Adass Synagogue in Melbourne was deliberately set on fire in an arson attack, causing severe damage to the building.
  • Three buses were destroyed by onboard bombs, while two additional buses in the nearby city of Holon were found with identical devices that failed to detonate.
  • Cody Balmer set fire to Pennsylvania Governor Josh Shapiro’s official residence, reportedly motivated by personal hatred toward the governor and his “plans for Palestine”.
  • Two Israeli Embassy staff members were killed outside the Capital Jewish Museum in Washington, D.C., when a gunman opened fire during a “Young Diplomats Reception” hosted by the American Jewish Committee as they left the event.
  • An assailant attacked Jewish participants at a solidarity walk for Israeli hostages with a makeshift flamethrower and Molotov cocktails, injuring at least seven people, including the suspect, and an 82-year-old Holocaust survivor who later died from her injuries.
  • An assailant set fire to the front door of a centuries-old historic synagogue.
  • A Jewish man was brutally assaulted in Montreal in front of his children.
  • Jews attending an Israeli hostage vigil in Frankfurt were attacked with red paint.

Platform Distribution

The sample was distributed across platforms as follows, with percentages representing the share of total posts collected:

This distribution demonstrates the prevalence of antisemitic denial and conspiratorial self-victimization posts across social media platforms, with X alone accounting for half of all recorded posts. Additionally, engagement levels on X were notably high, representing 85.5% of total views and 64.3% of all interactions. These figures underscore X’s dominant role in amplifying such content online.

Views & Engagement

The dataset analyzed in this report reflects nearly 14 million views and over 500,000 user interactions.[10] This makes it CyberWell’s second most-viewed verified dataset behind the organization’s earlier study on denial and conspiratorial self-victimization related to the October 7 massacre. In that report, CyberWell analyzed posts which amassed over 25 million views within the first three months following that event. These findings reaffirm that denial and conspiratorial self-victimization remain among the most pervasive manifestations of antisemitism online.

Below is a breakdown of the total engagement observed:

Comments | 21,417

Likes | 408,016

Shares & Retweets[11] | 85,574

Views[12] | 13,759,094
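As a quick arithmetic cross-check (a sketch for illustration, not part of CyberWell’s methodology), the figures above are consistent with the totals cited earlier in the report:

```python
# Engagement figures as reported in the breakdown above.
engagement = {
    "comments": 21_417,
    "likes": 408_016,
    "shares_retweets": 85_574,
}
views = 13_759_094  # views are tracked separately from interactions

# User interactions = comments + likes + shares/retweets.
interactions = sum(engagement.values())
print(interactions)  # 515007 -> "more than 500,000 user interactions"

assert interactions > 500_000
assert views < 14_000_000  # "nearly 14 million views"
```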

 

These levels of engagement also highlight the urgency for platforms to detect and moderate violative content following violent events. Denial and conspiratorial self-victimization posts are often strategically framed as expressions of “skepticism”, enabling them to evade moderation and gain traction before detection. When left unaddressed, these narratives rapidly escalate, normalizing antisemitism and fueling real-world harm.

Additionally, U.S. events (the D.C. shooting and the Boulder attack) accounted for 35.4% of the dataset, yet drew 80.3% of views and 56% of engagement. This reflects how antisemitic narratives achieve disproportionate traction around U.S.-linked incidents. CyberWell’s assessment is that this pattern is driven by antisemitic influencers with large, primarily English-speaking audiences, who are more likely in a U.S. context to deny that antisemitic attacks occurred or to claim that Jews orchestrated attacks against themselves.

 

Content Availability & Removal Rates

CyberWell analyzed the removal rates for all 308 posts in the dataset, including those identified as policy-violating, to evaluate how platforms address denial and conspiratorial self-victimization in the context of violent events targeting Jews and Israelis.

 

Removal Rates Across the Dataset:

TikTok | 34.9%[13]

Meta (Facebook & Instagram)[14] | 20.7%

YouTube | 13.6%

X | 13.5%

Overall Removal Rate | 18.5%

Removal Rates for Policy-Violating Posts:

TikTok | 34.9%[15]

X | 25%

Meta (Facebook and Instagram) | 21.6%

YouTube | 20%

Overall Removal Rate | 26.9%

 

Key Findings:

  • Only 18.5% of all posts in the dataset – and 26.9% of those identified as policy-violating – were removed. These figures remain significantly below the 50% removal rate documented in CyberWell’s 2024 Annual Report.
  • X hosted the largest share of denial and conspiratorial self-victimization content (50.7% of the dataset), and the highest level of engagement and views, yet removed just 25% of policy-violating posts despite consistent reporting by CyberWell to the platform.
  • YouTube demonstrated a similar removal rate (20%) but across a smaller dataset, suggesting either lower prevalence or more effective proactive moderation.
  • Meta removed nearly 21% of denial and conspiratorial self-victimization content. The platform’s community standards currently do not classify conspiratorial self-victimization as a policy violation.
  • TikTok recorded the highest removal rate (34.9%), though this remains low compared with the platform’s general enforcement rates for antisemitic content.[16]

October 7 vs. Other Events

It is noteworthy that 47.4% of the 308 posts in the dataset focused on denial or conspiratorial self-victimization narratives surrounding October 7.

CyberWell analyzed removal rates for this content to assess how platforms enforce their policies when addressing one of the most extensively documented recent antisemitic atrocities – the deadliest day for Jews since the Holocaust. The findings reveal exceptionally low enforcement: the overall removal rate for October 7 denial and conspiratorial self-victimization content was only 17.8%, while the removal rate for denial alone was just 25.7%. These figures contrast sharply with CyberWell’s earlier monitoring, conducted ten months after October 7, in which a 36.4% removal rate was recorded for comparable denial content.

There are likely two main explanations for this gap:

1. Enforcement Timeframe:

In CyberWell’s earlier monitoring, platforms had a longer period to act on reported content before removal rates were assessed. In the current dataset, many posts were reviewed over a shorter timeframe – often only one to two weeks – leaving less time for enforcement actions to be reflected. Nevertheless, allowing harmful content to remain online for extended periods is unacceptable. Platforms must enforce their policies swiftly to prevent denial and conspiratorial self-victimization narratives from spreading and inflicting further harm.

2. Declining Moderation Over Time:

Platforms appear to have applied stricter enforcement in the immediate aftermath of October 7 but have since seemingly relaxed their approach. This erosion of moderation standards is deeply concerning. Denial of violent antisemitic events should be treated with the same zero-tolerance threshold as Holocaust denial, as both serve to erase Jewish victimhood and—particularly in the case of October 7—can perpetuate real-world harm. The difference lies primarily in platform policy. As documented in CyberWell’s Annual Report (2024), platforms explicitly prohibit Holocaust denial and distortion, which accounts for their comparatively higher removal rates. By contrast, most platforms, with the exception of TikTok, have not yet clearly classified October 7 denial under existing hate speech policies, leading to inconsistent enforcement despite its similar impact.

When measured against other violent incidents, October 7 continues to dominate discourse, with denial and conspiratorial self-victimization narratives still circulating more than two years later. Moreover, these narratives have increasingly appeared in subsequent episodes such as the shooting of two young Israeli diplomats and AJC lay leaders outside of the Capital Jewish Museum in Washington, D.C. October 7 thus functions as a recurring reference point for antisemitic denial and conspiratorial self-victimization across new events.

Narrative Overlaps & Patterns

CyberWell’s analysis identified CW3 – Conspiratorial self-victimization against Jews – as the most prominent category in this dataset, appearing in 88% of posts. This demonstrates that the narrative inversion of blaming Jews for violence committed against them remains the dominant form of denial and conspiratorial self-victimization within antisemitic discourse.

In addition, an overlap between CW3 and the second example of the IHRA working definition (hereinafter, “IHRA Example 2”) was observed in 16% of the posts. IHRA Example 2 is defined as:

“Making mendacious, dehumanizing, demonizing, or stereotypical allegations about Jews as such or the power of Jews as collective – such as, especially but not exclusively, the myth about a world Jewish conspiracy or of Jews controlling the media, economy, government or other societal institutions”.

IHRA Example 2 forms the foundation of all major platforms’ hate speech policies, as it clearly targets Jews – a protected group – through generalizations and conspiratorial claims. By definition, such content violates community guidelines. Consequently, when Jews are victim-blamed, these narratives are often paired with harmful generalizations or conspiracy theories – for example, claims that Jews control the media, are inherently evil, or are predisposed to commit acts of violence.

Denial and Conspiratorial Self-Victimization: CW Category Breakdown

Page 8

Denial and Conspiratorial Self-Victimization: CW Category Breakdown

This section examines each category of denial and conspiratorial self-victimization (CW1–CW4), detailing their prevalence in the dataset and the roles they play within antisemitic discourse.

CW1 – Denial of Violent Events Against Jews

This category encompasses narratives that outright reject or minimize violent attacks targeting Jews. In this dataset, CW1 included claims that violent events against Jews never occurred, that their scale was exaggerated, or that specific atrocities – such as instances of sexual violence – did not take place. These forms of denial aim to erase Jewish victimhood and undermine documented evidence of antisemitic violence.

In the YouTube video below, titled “Oct 7 r*pe hoax debunked... again”, a known denier of October 7, Max Blumenthal, repeatedly denies the mass sexual assaults committed against Jews[17] on October 7. He refers to the “mass rape deception of October 7th” [00:15-00:16] and claims, “there are zero complainants of alleged cases of rapes committed by Palestinians on October 7th, and no organized chains of evidence [...] Israel still can't find any October 7th rape victims, there's no testimony, no evidence [...]” [00:34-00:53]. The video promotes clear sexual violence denial, dismissing documented accounts from survivors and witnesses, and attempts to delegitimize evidence of these crimes.

In response to a tweet about the Frankfurt red paint assault (August 23, 2025), the X user below dismissed the incident as fake, arguing that the photo could not be genuine because there was no paint visible on the victim’s glasses.

 

CW2 – Denial of Violent Events Against Israelis

This category encompasses narratives that deny or minimize violent attacks targeting Israelis. Posts categorized under CW2 often reject the occurrence of attacks altogether or downplay their scale. By erasing or diminishing these events, CW2 content seeks to delegitimize Israeli victimhood and undermine recognition of antisemitic violence.

In the TikTok video below from November 8, 2024, the speaker dismisses reports of antisemitic violence against Israelis in Amsterdam a day earlier, stating: “[...] And now of course the Israeli officials and Zionists are claiming there is a pogrom on the streets. [...] Typical and textbook Israeli gaslighting. Commit crimes, claim victimhood and antisemitism [...] You may see many more of these incidents happening in the future, so be mindful and don't give them the card” [00:58-01:54]. By referring to these events as “Israeli gaslighting” and alleging that Israelis (and by extension, Jews) “commit crimes” and then “claim victimhood”, the speaker is engaging in violent event denial. This framing delegitimizes the antisemitic nature of the attacks, implying they are fabricated or exaggerated to serve political purposes. It portrays Jews and Israelis as manipulative actors who exploit accusations of antisemitism to shield themselves from criticism. The suggestion that more such “fabricated” incidents will occur in the future reinforces the narrative that Jewish victimhood is manufactured, further undermining real experiences of antisemitism and normalizing hostility toward Jewish communities.

 

Similarly, in this now-removed TikTok video, the speaker denies that recent violent incidents against Jews in Amsterdam were antisemitic, stating there is an “[...] overwhelming amount of evidence that's on social media that completely disproves the mainstream media narrative that these are pogroms, that these attacks were antisemitic [...] Fuck this fucking world, man [...]” [00:46-01:39]. By asserting that the events were not pogroms and not antisemitic, the speaker engages in pogrom denial, erasing the targeted nature of the violence against Israelis in Amsterdam.

 

CW3 – Conspiratorial Self-Victimization Against Jews

This category encompasses narratives that invert responsibility for violence, portraying Jews as the perpetrators rather than the victims of an antisemitic attack. CW3 posts claim that Jews staged, provoked, or carried out the violence for their own benefit. This form of narrative inversion is the most prevalent in the dataset, appearing in 88% of posts.

In the Facebook post below, the user advances antisemitic conspiracy theories and conspiratorial self-victimization rhetoric related to the October 7 attacks. The post invokes the antisemitic “fake Jews” theory, which falsely asserts that Jews are imposters, and further claims that Jews orchestrate false flag operations to manipulate global events. The attached meme reinforces these ideas, suggesting that Israeli forces, rather than Hamas, were responsible for the deaths of Jews on October 7. Together, the post and image promote demonstrably false narratives that deny Hamas’s role in the October 7 massacre and portray Jews and Israel as the aggressors, directly advancing antisemitic conspiracy theories.

 

The tweet below similarly implies that a Jew was responsible for the arson attack on the East Melbourne Synagogue on July 4, 2025. In response to another tweet questioning the suspect, the user replied: “Found him” accompanied by an image of a religious Jew wearing a kippah and sidelocks, who appears to have Down Syndrome. This post not only constitutes conspiratorial self-victimization against Jews but also relies on a harmful generalization portraying Jews – including religious Jews – as having genetic disorders.

 

CW4 – Conspiratorial Self-Victimization Against Israelis

This category refers to narratives that blame Israelis for violent attacks committed against them. Rather than acknowledging Israelis as targets of antisemitic violence, CW4 posts allege that the attacks were staged, self-inflicted, or otherwise orchestrated by Israelis for political or strategic purposes. While less common than other categories, CW4 illustrates how denial and narrative inversion extend specifically to Israelis, portraying them as perpetrators rather than victims.

In this Facebook post, the user perpetuates conspiratorial self-victimization narratives and accuses Jews and Israelis of killing their own civilians during the October 7 massacre. The post further generalizes them as "always" creating false flag operations. A “false flag” refers to an attack carried out to disguise the true perpetrators and blame another party. In antisemitic discourse, this term is often misused to claim that Jews or Israelis stage violent attacks against themselves to garner sympathy, advance political objectives, or justify subsequent actions.

 

In this Facebook Reel, the user references the Bat Yam bus bombing from February 20, 2025, claiming that “Zionists” – used here to mean Israelis – were behind the attack as part of an alleged pattern, rather than acknowledging it as a genuine act of terror.

Comment Analysis: How Antisemitic Users Respond

Page 9

Comment Analysis: How Antisemitic Users Respond

Antisemitic narratives are not limited to original posts but also appear frequently in the comment sections. CyberWell’s monitoring shows that users often post antisemitic comments not only under explicitly antisemitic content but also beneath neutral posts and even mainstream news reports. While antisemitic responses in comment sections are a major pain point in online antisemitism—often featuring dehumanization, mocking of victims, and coded language such as the widespread use of pigs and bars of soap—comment threads also serve as a key pathway for denial and conspiratorial “self-victimization” narratives to enter public discourse in reaction to violent events against Jews.

In the Facebook post below, a user shared a video about the shooting of two Israeli Embassy staffers in Washington, D.C., with the caption “Thoughts???”. In the comments on her own post, she added: “Inside job?” and “We know how much they love killing their own”. By placing these antisemitic conspiratorial self-victimization remarks in the comment section rather than in the original caption, the user appears to attempt evasion of moderation while keeping the post publicly visible.

Insights & Patterns

Page 10

Insights & Patterns

  • Event overlaps: Around 13% of the dataset advances a recurring “false-flag” narrative, alleging that Jews – portrayed as “manipulative” or “evil” – staged events, including instances of self-directed harm, to influence public perception. These claims often linked multiple incidents, ranging from the D.C. shooting and the October 7 attacks to other occurrences dating back to the 1990s, such as the 1994 bombing of the Israeli Embassy in London. In dozens of posts, users denied that specific violent incidents had occurred or claimed they were staged by Jews or Israelis. These narratives were then linked to earlier antisemitic attacks and framed as part of an alleged recurring “pattern”.

The tweet below illustrates this insight clearly, with the user alleging that Jews not only staged the Boulder Molotov Attack on June 1, 2025, but were also responsible for the D.C. shooting that resulted in the murder of two Jews.

 

  • Use of “Zionist” as a slur: The term “Zionist” appeared frequently across the dataset, often used not to describe political ideology, but as a proxy for Jews and Israelis, or simply as a derogatory label. This aligns with CyberWell’s broader observation that “Zionist” is routinely deployed as a slur in multiple antisemitic narratives.

The user below refers to Israel – and by extension, Jews – as “Zionist bastards”.

  • False attribution of Jewish identity: A recurring form of conspiratorial self-victimization involved falsely assigning Jewish identity to perpetrators of violence, even without evidence. These attributions were often delivered in mocking or cynical tones. For example, following the Washington, D.C. shooting committed by Elias Rodriguez, some users falsely claimed he was Jewish because his first name resembled the Hebrew name “Eliyahu”. Although sometimes framed as jokes, such claims ultimately serve to deflect blame onto Jews—even when they are the victims of the attack.

 

Similarly, the user below claims that the perpetrator of the attack in Boulder, Colorado, was Jewish, describing the incident as a “false flag” and calling for an investigation into his alleged Jewish ancestry.

 

  • Recurring terms of conspiratorial self-victimization: Specific phrases repeatedly surfaced across the dataset, including “psyop”, “inside job”, “false flag” (the most dominant), and “staged”. These terms frame attacks as orchestrated or staged, thereby casting doubt on Jewish and Israeli victimhood.
  • Cultural mockery: In addition to denial and conspiratorial self-victimization narratives, the dataset also contained elements of cultural mockery, in which Jewish identity was ridiculed through terms such as “Cohencidence” and “Oy vey”. “Oy vey” is a Yiddish expression of dismay or frustration that has become strongly associated with Jewish identity. When used in antisemitic posts, it functions as a caricature of Jewish speech, reducing Jewish culture to a stereotype for ridicule. Similarly, “Cohencidence”, a play on the common Jewish surname Cohen, is used in conspiracy spaces to mockingly suggest that Jews are secretly behind major events. Such expressions minimize Jewish suffering by turning it into a punchline and normalizing antisemitism in everyday discourse.

Policy Analysis

Page 11

Policy Analysis

Platform Enforcement Gaps on Denial and Conspiratorial Self-Victimization

As the low social media platform removal rates above demonstrate, addressing denial and conspiratorial self-victimization remains a persistent challenge. TikTok is the only platform that currently prohibits all forms of such content. Through in-depth analysis of the report dataset, CyberWell identified several policy and enforcement gaps that help explain these extremely low removal rates.

Denial

Denial of violent events is formally prohibited across all major platforms:

  • X (Twitter): Under its Abuse and Harassment policy, X includes a clause titled “Violent Event Denial”, which states: “We prohibit content that denies that mass murder or other mass casualty events took place, where we can verify that the event occurred, and when the content is shared with abusive context. This may include references to such an event as a “hoax” or claims that victims or survivors are fake or “actors.” It includes, but is not limited to, events like the Holocaust, school shootings, terrorist attacks, and natural disasters”.
  • Meta: Under its Bullying and Harassment policy, Meta classifies denial of violent events as violating content, prohibiting: “Claims that a violent tragedy did not occur” and “Claims that individuals are lying about being a victim of a violent tragedy or terrorist attack, including claims that they are acting, pretending to be victims, or paid or employed to mislead people about their role in the event”.

In addition, Meta explicitly prohibits Holocaust denial and distortion under its Hateful Conduct policy, within the category of “Harmful stereotypes historically linked to intimidation or violence.”

  • YouTube: Under its Hate Speech policy, YouTube prohibits the “Denial or minimization of a well-documented, major violent event or the victimhood of such an event”.
  • TikTok: TikTok’s Hate Speech and Hateful Behavior policy prohibits “Denying or minimizing well-documented historical atrocities against protected groups, such as the Holocaust or the genocide against the Tutsi in Rwanda.”
    Specifically regarding October 7, TikTok expanded its enforcement to recognize both general denial of the event and denial of specific, well-known details as violative content.

Despite these policies, CyberWell’s data reveal that enforcement remains inconsistent, with not all forms of violent-event denial being addressed uniformly across platforms.

I.  Denial of Specific Elements vs. Comprehensive Denial

Platforms tend to remove only outright denials of entire events (e.g., “There was no October 7 attack”). However, posts disputing specific aspects – such as “There were no sexual assaults on October 7” – often remain online, even when those claims contradict verified findings by international bodies such as the United Nations.

CyberWell’s analysis indicates that denial of sexual assault during the October 7 attacks has not prompted consistent enforcement across platforms. This represents a significant policy gap: much of today’s denial content does not reject the occurrence of entire events but rather targets well-documented components – thus normalizing antisemitic narratives and fostering continued hostility toward Jews and Israelis.

II.  Definition of “Major Violent Event”

Both YouTube and X limit enforcement to the denial of “major violent events” or “mass-casualty” incidents.

This narrow framing excludes smaller-scale antisemitic attacks and incidents specifically targeting Jews and Israelis, even though these repeated, targeted attacks are indicative of how antisemitism is currently spreading on a global scale.

Consequently, content that questions victims’ legitimacy or mocks the event’s authenticity often remains online because it does not meet the platforms’ technical criteria. In antisemitic contexts, however, denial is not merely a factual distortion – it serves to erase Jewish suffering and justify further hostility.

III.  Expressions of Doubt or “Questioning”

Across platforms, users frequently express skepticism about antisemitic violence. Such content often remains online, as it is interpreted as non-violative “doubt”.
CyberWell’s data indicates that, unlike in Holocaust denial contexts, this type of content is rarely enforced, despite its role in normalizing and minimizing real-world violence against Jews.

 

Conspiratorial Self-Victimization

Among mainstream social media platforms, TikTok is the only one that prohibits all forms of conspiratorial self-victimization as part of its harmful misinformation policy. According to TikTok’s Community Guidelines, “conspiracy theories or hoaxes that could cause significant harm, such as those that make a violent call to action or have links to previous violence”, are prohibited and considered policy-violating content.

By contrast, other platforms do not address conspiratorial self-victimization at all in their policies, except in relation to the Holocaust, where such narratives are explicitly prohibited and consistently well-enforced. For other violent events targeting Jews or Israelis, conspiratorial self-victimization remains largely unaddressed. Its most common manifestation is the “false flag” claim, in which Jews or Israelis are accused of staging violent attacks against themselves to gain sympathy, secure political advantage, or justify future actions. On this point, TikTok again stands out as the only platform that both prohibits such content and renders the term “false flag” non-searchable. This reflects a critical enforcement gap: while Holocaust-related conspiracies are recognized and penalized, modern antisemitic conspiracy narratives remain largely unchecked.

Recommendations

Page 12

Recommendations

1. Adopt explicit policies against violent event denial

Platforms should clearly define denial of violent attacks against Jews and Israelis as prohibited content, in the same way Holocaust denial is banned. This definition must include all forms of denial, including outright rejection of the event, minimization of its scope, and denial of documented atrocities such as mass killings and sexual violence on October 7.

 

2. Include all well-documented violent events and address the “mass-casualty” threshold

Policies should be broadened so that denial of any well-documented violent event constitutes a violation, regardless of scale or public recognition. Currently, YouTube emphasizes “major” violent events, and X limits enforcement to “mass murder or other mass-casualty events”. This narrow framing leaves significant gaps, especially given how targeted antisemitism is currently spreading across the world, and should be expanded to cover well-documented violent events against Jews of all scales, however they are characterized.

 

3.  Explicitly prohibit antisemitic conspiratorial self-victimization

Platforms should adopt TikTok’s approach by recognizing conspiratorial self-victimization as violative in all its forms. While some hate speech and harassment policies prohibit “mocking victims” of violent events, this framing is insufficient. Antisemitic conspiratorial self-victimization has evolved beyond mockery: many posts now acknowledge the violence while delegitimizing it by questioning the victimhood or reality of the victims. Policies must be adapted to address this shift in order to prevent secondary harm to Jewish communities in the aftermath of violent events.

4. Develop stronger detection tools for denial and conspiratorial self-victimization

Platforms should build systems to automatically flag common keyword combinations associated with these narratives, for reviewer triage and proactive intervention.
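As a minimal illustration of what such keyword-combination flagging could look like, the sketch below pairs the recurring conspiratorial terms documented in this report (“false flag”, “psyop”, “inside job”, “staged”) and mockery terms (“Cohencidence”, “Oy vey”) with references to violent events. This is a hypothetical triage heuristic, not any platform’s actual system: the term groupings, the event list, and the flagging rule are illustrative assumptions, and keyword co-occurrence alone cannot establish intent, so the output is a signal for human review rather than an enforcement decision.

```python
# Terms drawn from this report's "Recurring terms of conspiratorial
# self-victimization" and "Cultural mockery" findings; the groupings,
# event list, and flagging rule below are illustrative assumptions.
CONSPIRACY_TERMS = ["false flag", "psyop", "inside job", "staged", "hoax"]
MOCKERY_TERMS = ["cohencidence", "oy vey"]
EVENT_TERMS = ["october 7", "oct 7", "boulder", "d.c. shooting",
               "amsterdam", "synagogue"]


def flag_for_review(text: str) -> dict:
    """Flag a post for reviewer triage when conspiracy or mockery terms
    co-occur with a reference to a documented violent event."""
    lowered = text.lower()
    hits = {
        "conspiracy": [t for t in CONSPIRACY_TERMS if t in lowered],
        "mockery": [t for t in MOCKERY_TERMS if t in lowered],
        "event": [t for t in EVENT_TERMS if t in lowered],
    }
    # Co-occurrence is only a triage signal: a news report may quote
    # these phrases, so a human reviewer makes the final call.
    flagged = bool(hits["event"]) and bool(hits["conspiracy"] or hits["mockery"])
    return {"flagged": flagged, "matches": hits}
```

A post reading “Another staged ‘false flag’ on October 7” would be flagged because an event term co-occurs with conspiracy terms, while a neutral memorial announcement would not; threshold tuning, multilingual term lists, and obfuscation handling (e.g., “f@lse flag”) would all be needed before anything like this could support proactive intervention.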

Appendix

Page 13

Appendix

This appendix presents selected examples of each antisemitic narrative analyzed in this report. These cases illustrate recurring patterns observed across the dataset and demonstrate how denial and conspiratorial self-victimization are expressed in practice. By monitoring such patterns, platforms and researchers may be better equipped to identify additional posts that replicate these themes and intervene more effectively.

CW1 – Denial of Violent Events Against Jews

 

CW2 – Denial of Violent Events Against Israelis

 

CW3 – Conspiratorial Self-Victimization Against Jews

 

CW4 – Conspiratorial Self-Victimization Against Israelis

 

Footnotes

Page 14

Footnotes

[1] Troschke, Hannah. “Holocaust Distortion and Denial”. In Decoding Antisemitism, ed. Monika J. Becker et al. (2024), 239–243.

[2] See e.g., Deborah E. Lipstadt. Denying the Holocaust: The Growing Assault on Truth and Memory. (1993), 26–27; Wistrich, Robert Solomon. “Introduction: Lying about the Holocaust”. In Holocaust Denial: The Politics of Perfidy. (2016) 12, 25.

[3] Jerusalem Institute of Justice. Echoes of Denial: How Atrocity Denial Fuels Antisemitism After October 7. (2025), https://jij.org/wp-content/uploads/2025/07/Echoes-of-Denial-2025.pdf.

[4] Ibid., 4.

[5] Ibid., 7.

[6] Bar-Halpern, G., & Wolfman, A. Traumatic Invalidation in the Jewish Community after October 7. Journal of Human Behavior in the Social Environment. (2025), 4. https://www.tandfonline.com/doi/full/10.1080/10911359.2025.2503441.

[7] Per IHRA, the notion that Jews ‘invented the Holocaust’ also encompasses the false claim that Jews caused their own genocide.

[8] When reviewing IHRA’s Handbook, as well as commentaries by the Antisemitism Policy Trust (p.7), Canadian Handbook, and WJC, all explain Example 3 as cases where Jews are blamed for events in which they were not the victims – none mention examples of Jews being blamed for tragedies directly targeting them. As explained in the WJC analysis, IHRA 3 reflects classic antisemitism, where Jews were scapegoated for society’s disasters. In contrast, conspiratorial self-victimization and false-flag theories are more modern phenomena, developing mainly in the 19th–20th centuries. At the time IHRA adopted its 11 examples (2016), conspiratorial self-victimization against Jews was nearly non-existent outside Holocaust distortion. Its resurgence has occurred after October 7, where we now see it following every violent event against Jews and Israelis.

In addition, as part of our rationale that IHRA’s 11 examples reflect distinct narratives of antisemitism, CyberWell treats the examples as mutually distinct categories. For instance, IHRA 11 (holding Jews collectively responsible for Israel’s actions) could theoretically overlap with IHRA 3, yet we separate them because IHRA 11 reflects a more modern, Israel-related antisemitism, while IHRA 3 reflects classic antisemitism. Likewise, if IHRA 3 included conspiratorial self-victimization, then blaming Jews for the Holocaust would fall under type 3. Instead, IHRA addressed Holocaust conspiratorial self-victimization separately under Example 5. As noted, in 2016 conspiratorial self-victimization against Jews was nearly non-existent outside Holocaust distortion – likely why this is the only conspiratorial self-victimization example included in IHRA. This distinction reinforces that IHRA 3 deals only with scapegoating.

Although not entirely separate categories, conspiratorial self-victimization follows a different rationale than scapegoating. Scapegoating assigns Jews responsibility for broad societal disasters, whereas conspiratorial self-victimization targets Jews specifically as victims – mocking them for their identity, blaming their nationality for the tragedy, and ultimately denying that Jews and Israelis have the right to be recognized as victims at all.

[9] CyberWell classifies the October 7 massacre as a violent attack against Jews, not only due to the background of the victims, the vast majority of whom were Jews, but also due to the intention of Hamas to deliberately target and harm Jews. Hamas’ core charter, the 1988 Hamas Covenant, lays out their intention to commit genocide against the global Jewish community. Additionally, CyberWell observed footage recorded by Hamas on October 7, in which the terrorists explicitly stated that they came to murder Jews. For further information, see our report “Denial of the October 7 Massacre on Social Media Platforms” (pp. 7–8).

[10] This total includes the sum of all comments (21,417), likes (408,016), and combined shares and retweets (85,574) recorded across all platforms. Views are counted separately.

[11] This figure combines both retweets (X) and shares (Meta, TikTok, and YouTube).

[12] View metrics were not calculated for Facebook posts unless they were videos. On Facebook and Instagram, view counts apply only to videos and reels.

[13] After the dataset was finalized, TikTok conducted a review and re-examination of the content, after which the removal rate increased to 93%, with the posts taken down for violating its harmful misinformation policy.

[14] While Facebook and Instagram are listed separately in the platform distribution data, their removal and enforcement rates are reported together under “Meta”, since both platforms are owned and operated by the same company. This grouping reflects how content moderation policies and enforcement practices are applied across Meta platforms.

[15] See footnote 13.

[16] See footnote 13. In CyberWell’s 2024 Annual Report, the overall removal rate on TikTok was 65.1%, while the removal rate for policy-violating posts reached 86%.

[17] For a detailed explanation of why we categorized this as a violent event against Jews and not against Israelis, see footnote 9.
