Antisemitism Online Amid National Elections (2024-2025)

In the lead-up to national elections in the United Kingdom (U.K.), the United States (U.S.), Canada, and Australia between 2024 and 2025, CyberWell conducted a cross-country study of online antisemitic discourse related to political candidates and parties. This report reveals that a significant portion of this content, despite violating social media platform guidelines, remained online.

Executive Summary

  • In the lead-up to national elections in the United Kingdom (U.K.), the United States (U.S.), Canada, and Australia between 2024 and 2025, CyberWell conducted a cross-country study of antisemitic discourse in English and Arabic related to political candidates, parties, and electoral systems.

 

  • The dataset in this report is based on 338 pieces of verified antisemitic content collected from Meta (Facebook and Instagram), TikTok, X, and YouTube, each containing anti-Jewish tropes and conspiracy theories connected to recent election cycles. CyberWell analyzed and verified this content as antisemitic, according to the globally accepted IHRA (International Holocaust Remembrance Alliance) definition of antisemitism.

 

  • Together, these posts amassed more than 3.5 million views and generated nearly 230,000 user interactions. Although 92.3% of the dataset violated the platforms’ community guidelines and was reported for policy violations, only 23.6% of the violative posts were removed in real time, when the content was first discovered during the relevant election cycle. At the time of writing this report, just 37.8% had been removed, despite several months having passed since the content was reported to the platforms. Both figures are lower than the 50% average removal rate documented in CyberWell’s 2024 annual report.

 

  • X hosted the largest share of antisemitic election-related content (66.3%) and saw the highest engagement (195,253) but also had the weakest enforcement: only 27% of violative posts had been removed at the time of this report. This reflects X’s permissive policy approach, which classifies much of the false information posted on the platform about politics or elections as “controversial viewpoints” rather than policy violations. By contrast, YouTube is currently the only major platform with an explicit clause addressing election-related hate speech – likely contributing to its having the lowest volume of antisemitic election content in the dataset (1.5%).

 

  • Although U.S. election content made up only 38.8% of the dataset, it accounted for 90.1% of total views and 85.5% of total interactions, indicating a disproportionate level of engagement and user interest compared to content related to other countries’ elections. This pattern supports CyberWell’s assessment that antisemitic content tends to gain greater visibility around U.S.-related events. One likely contributing factor is that English-speaking antisemitic influencers disproportionately focus on developments in the U.S. and Israel rather than in other regions.

 

  • The overwhelming prevalence of the second example of the IHRA working definition (hereinafter: “IHRA Example 2”) – appearing in 91.4% of all posts – underscores that election-related antisemitism is predominantly framed around broad conspiratorial claims about Jewish power and influence. Since IHRA Example 2 is the foundation of most major platforms’ hate speech policies (covering generalizations and conspiracies against protected groups), its dominance highlights both the centrality of this trope in online Jew-hatred and the limited effectiveness of current Trust & Safety enforcement efforts. The continued presence of IHRA Example 2 content, in violation of platform policy, suggests that platforms are failing to apply their own hate speech policies consistently, particularly in the context of elections.

 

  • CyberWell documented a range of common antisemitic narratives, as well as the repeated use of antisemitic images and memes – such as the Happy Merchant caricature – recycled across multiple election cycles.

 

  • This report further offers targeted recommendations for improving election-related policies on social media. These recommendations include adopting explicit hate speech guidelines specific to election contexts, strengthening detection of coded antisemitic language and imagery, and giving posts containing election-related hate speech the same level of priority as those containing election-related misinformation. Strengthening and consistently applying such measures can contribute to safer, higher-integrity online spaces during democratic elections.

Introduction

National elections are widely recognized as pivotal moments in any democracy. They are periods marked by intensified public discourse, heightened political competition, and elevated social tensions. Unfortunately, as these conditions and tensions are mirrored online, particularly on social media, such periods have also seen an increased spread of anti-Jewish rhetoric.

Between late 2023 and mid-2025, CyberWell monitored online discourse surrounding national elections in four major Western democracies: the United Kingdom (U.K.), the United States (U.S.), Canada, and Australia. In doing so, CyberWell identified a clear pattern: levels of online antisemitism increase during elections, centering on power and conspiracy theories about Jewish control of governments.

Election-related antisemitism often centers around the claim that Jewish individuals or groups control political candidates, governments, and electoral systems. Across the four countries analyzed in this report, antisemitic narratives often portrayed politicians as “puppets” of Jewish interests, invoking longstanding conspiracy theories about Jewish control over the media and financial institutions. In many cases, Jews were also scapegoated for certain societal or political developments leading up to Election Day. Such tropes are rooted in a long history of antisemitism and are resurfaced and amplified during politically charged periods.

This report offers a cross-country analysis of how antisemitism spread across major social media platforms during four national elections. It begins with a brief overview of online antisemitism and hate speech in the periods leading up to and during elections, incorporating relevant academic perspectives on the topic. Next, it presents a detailed examination of the dataset used in the analysis, the top narratives prominent across all four electoral cycles, and the removal rates of election-related antisemitism, including how those rates align with each platform’s approach to this issue. The report concludes with practical recommendations for improving content moderation, strengthening policy implementation, and guiding future monitoring efforts during upcoming elections.

CyberWell’s Mission

CyberWell is a non-profit organization dedicated to eradicating online antisemitism by driving the enforcement and improvement of community standards and hate speech policies across social media platforms. Using data-driven monitoring, CyberWell identifies where policies are either not enforced or insufficient to protect Jewish users from harassment and hate.

The organization’s unique methodology combines antisemitic keyword detection, a specialized dictionary based on the International Holocaust Remembrance Alliance’s (IHRA) definition of antisemitism, and human review. CyberWell’s professional analysts – trained in antisemitism, linguistics, and digital policy – vet each piece of content in relation to the IHRA definition and platform policies. For more about our methodology, check out our policy guidelines.
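To make this pipeline concrete, below is a minimal Python sketch of a keyword-dictionary first pass feeding a human review queue. The terms, data structures, and matching logic are illustrative assumptions, not CyberWell’s actual dictionary or tooling; as described above, the final IHRA-based determination is always made by trained analysts.

```python
from dataclasses import dataclass, field

# Hypothetical dictionary entries mapping coded terms to the narrative they may signal.
KEYWORD_DICTIONARY = {
    "rothschild": "Rothschild conspiracy theory",
    "synagogue of satan": "Jews are 'evil'",
    "zio": "'Zionist' as a proxy term",
    "j3w": "creative misspelling of 'Jew'",
}

@dataclass
class Post:
    platform: str
    text: str
    matched_narratives: list[str] = field(default_factory=list)

def flag_for_review(post: Post) -> bool:
    """First pass: queue the post for human review if any dictionary term matches."""
    lowered = post.text.lower()
    for term, narrative in KEYWORD_DICTIONARY.items():
        if term in lowered:
            post.matched_narratives.append(narrative)
    return bool(post.matched_narratives)

# Keyword matching only queues candidates; analysts vet each one against the
# IHRA definition and platform policies, filtering out false positives.
review_queue = [p for p in [Post("X", "Another J3w puppet wins")] if flag_for_review(p)]
```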

CyberWell currently monitors Facebook, Instagram, TikTok, X (formerly Twitter), and YouTube in both English and Arabic. The organization serves as a trusted flagger for both Meta (Facebook, Instagram, & Threads) and TikTok, enabling it to escalate policy-violating content and advise content moderation teams directly. As part of its commitment to democratize data, CyberWell compiled the first ever open data platform of online antisemitic content in May 2022.

Online Antisemitism in Election Periods: Academic Perspectives

Election periods often coincide with increased political expression across social media platforms. While criticism of politicians and public figures is expected during these times, such discourse can also serve as a vehicle for hate speech and extremism, particularly when it targets protected groups, including Jewish communities. Antisemitism is frequently – and disingenuously – framed as political critique, adding a further layer of complication for platforms seeking to detect and remove harmful content effectively.

Additionally, academic research has underscored how often the boundary between political discourse and hate speech is crossed online. A 2022 study[1] of antisemitism on Twitter (now X) investigated “[…] how antisemitic speech manifested itself in political discourse on Twitter in the lead-up to the 2018 US midterm elections”.[2] The study found that longstanding antisemitic tropes, such as claims that Jews control the media or political systems, were regularly embedded in broader political commentary. As the authors note: “[…] age-old antisemitic conspiracy theories concerning Jewish ‘puppeteers’ are rehashed and utilized to express hatred of Jewish people […]”.[3] This tactic allowed such content to evade moderation.

Another study,[4] analyzing Italy’s 2022 general election, observed a “[…] prevalence of toxic and hateful interactions among Twitter users in the context of the 2022 Italian election”.[5] The study revealed that a significant portion of political posts contained hate speech, with religious and ethnic groups among the primary targets. As the authors state: “[…] religious and ethnic groups were often the target of hate speech, and these messages were frequently framed as part of mainstream political discourse, making them more difficult to detect and moderate […]”.[6] Together, these findings highlight the tendency of hateful narratives to be masked as legitimate mainstream political discourse, particularly during election periods.

This overlap between political and hateful content presents a clear challenge for content moderation – one that CyberWell has identified repeatedly while monitoring election-related antisemitism across multiple countries. Posts that refer to politicians or political movements may still violate hate speech policies, especially when they include dehumanizing language, conspiratorial framing, or harmful generalizations – such as attributing undue influence or negative traits to Jews as a whole. While platforms recognize election periods as sensitive, they tend to focus their monitoring and enforcement efforts primarily on misinformation,[7] with little attention allocated to election-related hate speech. Given CyberWell’s findings, and the record-breaking increase in violent antisemitism, particularly against Jews in Western democracies, addressing hate speech during electoral cycles should be an equal priority.

________________________________________________________________________________________________

[1] Riedl, M. J., Joseff, K., Soorholtz, S., & Woolley, S. (2024). Platformed antisemitism on Twitter: Anti-Jewish rhetoric in political discourse surrounding the 2018 US midterm election. New Media & Society, 26(4), 2213–2233. https://journals.sagepub.com/doi/pdf/10.1177/14614448221082122.
[2] Ibid., 2214.
[3] Ibid., 2226.
[4] Pierri, F. (2024). Drivers of hate speech in political conversations on Twitter: the case of the 2022 Italian general election. EPJ Data Science, 13(1), 63. https://epjdatascience.springeropen.com/articles/10.1140/epjds/s13688-024-00501-1.
[5] Ibid., 2.
[6] Ibid., 6.
[7] For example, some platforms, such as Meta and TikTok, acknowledge the tensions that come with election periods but mainly focus on misinformation.

Dataset Overview and Methodology

Between December 1, 2023, and April 22, 2025, CyberWell analyzed antisemitic election-related content across four national elections: the U.K., U.S., Canada, and Australia. A total of 338 posts on this topic were verified as antisemitic across five major platforms: Facebook, Instagram, TikTok, X (formerly Twitter), and YouTube. The content, in both English and Arabic, was collected using CyberWell’s methodology, which includes AI-powered detection, keyword-based tracking, and manual review by trained analysts. All content was evaluated using the IHRA Working Definition of Antisemitism.

 

Platform Distribution

The sample was distributed across platforms as follows, with percentages representing the share of total posts collected:

X | 66.3%

Facebook | 25.7%

TikTok | 3.6%

Instagram | 3%

YouTube | 1.5%



This distribution reflects the relative prominence of each platform in the context of election-related antisemitism, with X alone accounting for two-thirds of all recorded posts. This underscores X’s outsized role in spreading election-related antisemitic narratives across social media.

Sample Breakdown by National Election

The breakdown below shows the number of posts associated with each country’s election:

U.S. | 38.8%

Canada | 25.4%

Australia | 23.7%

U.K. | 12.1%

Views & Engagement

Out of the 338 antisemitic posts analyzed across the four national elections, over 3.5 million views and 229,062 user interactions[8] were recorded across five major social media platforms. This level of engagement underscores the extent to which antisemitic narratives were not only present but actively circulated and consumed during these election periods.

 

Below is a breakdown of the total engagement observed:

Comments | 9,642

Likes | 174,269

Shares & Retweets[9] | 45,151

Views[10] | 3,553,353
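As a quick arithmetic check of the breakdown above (the variable names are ours; the figures are from this section), interactions are the sum of comments, likes, and shares/retweets, while views are tracked separately:

```python
# Interaction total = comments + likes + shares/retweets; views are separate.
comments = 9_642
likes = 174_269
shares_and_retweets = 45_151

interactions = comments + likes + shares_and_retweets
assert interactions == 229_062  # the interaction total reported in this section

views = 3_553_353  # counted separately from interactions
```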

High engagement rates indicate that antisemitic posts are not only being shared during election time but are also reaching – and resonating with – large audiences. This underscores the urgency for platforms to detect and moderate violative content early, especially when such content is strategically framed to evade moderation or go viral.

Notably, the overwhelming majority of views and engagement in the sample stem from U.S. election-related content. Although this content represents 38.8% of the entire dataset, it accounts for 90.1% of total views and 85.5% of total engagement – a disproportionately high level of interaction compared to election content from other countries. This aligns with CyberWell’s broader observations that antisemitism often gains greater traction in response to real-world events linked to the U.S. This is likely because the growing cohort of openly antisemitic users and ‘influencers’ who post in English generally focus on U.S. and Israeli developments rather than events in other countries.

 

________________________________________________________________

[8] This total is the sum of all comments (9,642), likes (174,269), and combined shares and retweets (45,151) recorded across all platforms. Views are counted separately.
[9] This figure combines both retweets (X) and shares (Meta, TikTok, and YouTube).
[10] View metrics were not calculated for Facebook posts unless they were videos. On Facebook and Instagram, view counts apply only to videos and reels.

Content Availability and Removal Rates

CyberWell documented the availability status of each antisemitic post at two distinct points in time:

  1. Real-time monitoring – When the content was first detected during the relevant election cycle.

  2. Current availability – A follow-up check conducted at the time of this report’s publication, after a multi-week moderation window.

NOTE: CyberWell’s methodology includes this multi-week window to account for moderation lag before publishing removal rates, as in prior reports.

 

These two benchmarks provide a comparative view of platform responsiveness—both during and after the election periods.

NOTE: Removal rate data was not recorded in real time for the U.K. election in 2024. As a result, real-time removal rates reflect only the 297 posts collected from Australia, Canada, and the U.S. The current removal rates, however, reflect all 338 posts, including the U.K. For transparency, the tables below list the adjusted and total post counts per platform.

Real-Time Removal Rates

Excluding U.K. data (297 total posts)

Current Removal Rates

Including U.K. data (338 total posts)
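As a minimal sketch of this two-benchmark bookkeeping – with illustrative post statuses rather than the report’s underlying records – the snippet below shows how a removal rate is computed at each checkpoint over its respective denominator (297 posts for the real-time benchmark, 338 for the current one):

```python
from enum import Enum

class Status(Enum):
    ONLINE = "online"
    REMOVED = "removed"

def removal_rate(statuses: list[Status]) -> float:
    """Percentage of reported violative posts that were taken down."""
    removed = sum(1 for s in statuses if s is Status.REMOVED)
    return round(100 * removed / len(statuses), 1)

# Each post is checked twice: in real time during the election cycle, and again
# at publication after a multi-week moderation window. Toy data for illustration:
real_time = [Status.REMOVED, Status.ONLINE, Status.ONLINE, Status.ONLINE]
current = [Status.REMOVED, Status.REMOVED, Status.ONLINE, Status.ONLINE]
print(removal_rate(real_time), removal_rate(current))  # 25.0 50.0
```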

It is important to note that CyberWell also holds a Trusted Partner status (hereinafter: “TP”) with both Meta and TikTok, which enables prioritized reporting pathways and higher removal responsiveness. As a result, the removal rates for Facebook, Instagram, and TikTok reflect the experience of a TP rather than a typical user. Removal data for X and YouTube, where CyberWell does not have the same status, reflects general user enforcement outcomes.

 

Notable Findings:

  • Most violative content remained online.[11] Only 23.6% of violative posts were removed in real time, and just 37.8% had been removed at the time of writing this report, after escalation as a TP – both figures are lower than the 50% removal rate documented in CyberWell’s 2024 annual report.
  • X showed the weakest enforcement. The platform removed only 10.4% of violative posts in real time and just 27% at the time of this report’s publication, despite hosting the largest share of antisemitic content during election periods. This suggests that X is not effectively addressing election-related hate content.
  • Meta showed relatively consistent performance. Meta removed 53.8% of violative content in real time and 58.5% at the time of writing, after content was escalated as a TP – leaving over 40% of harmful posts online. Still, these rates suggest that Meta’s systems may be more responsive than others in detecting and removing antisemitic content within election-related contexts.
  • TikTok demonstrated strong follow-up enforcement. By the time of writing this report, 63.6% of violative posts had been removed. While the sample size from TikTok was small – partly due to limited search capabilities – this could indicate either lower prevalence of election-related antisemitism, or more effective moderation on the platform.
  • YouTube showed limited removals but very low exposure. Its current removal rate is just 33%, but the platform accounted for only 1.5% of the dataset. This may suggest that YouTube’s moderation systems are effectively limiting the presence or visibility of election-related antisemitism in the first place.

_________________________________________________________________________________

[11] CyberWell reported all posts in this dataset that were found to violate the platforms’ policies.

IHRA Classification

As previously mentioned, CyberWell uses the IHRA Working Definition of Antisemitism to classify content, including both traditional antisemitic tropes and contemporary examples – particularly those related to the State of Israel or comparisons of Israeli policy to that of Nazi Germany.

Of the 11 examples featured in the working definition, 10 appeared in the dataset collected across all four elections. Notably, IHRA Example 2 was the most widespread, appearing in 91.4% of all posts.

This example is defined as:

“Making mendacious, dehumanizing, demonizing, or stereotypical allegations about Jews as such or the power of Jews as collective — such as, especially but not exclusively, the myth about a world Jewish conspiracy or of Jews controlling the media, economy, government or other societal institutions”.

IHRA Example 2 represents the foundation of all major platforms’ hate speech policies, as it targets Jews – a group protected on the basis of religion – by means of generalizations and conspiracies, criteria that typically violate community guidelines. As such, it is reasonable to expect consistent and strong enforcement against such content. In practice, however, enforcement against election-related hate speech – even when it violates platform policies – appears to be significantly lower than average enforcement rates against hate content across platforms.

 

NOTE: Not every post deemed antisemitic according to the IHRA definition necessarily violates social media platforms’ policies and community guidelines. For example, IHRA Examples 7–10 often involve antisemitic content directed towards the State of Israel, which is not considered a protected category under platform rules. In contrast, content aligned with other examples, such as IHRA Example 2, which targets Jews as a protected group, typically violates platform guidelines.

Narrative Patterns in Election-Related Antisemitism

Antisemitic content during the election periods across the U.K., U.S., Canada, and Australia followed recognizable patterns rooted in conspiracy, demonization, and cultural erasure.

The top four antisemitic narratives and tropes in this dataset were:

  1. Jewish Political Control
  2. Jews Are “Evil”
  3. Offensive Generalizations Against Jews
  4. Rothschild Conspiracy Theory

Jewish Political Control

The most dominant narrative in this dataset focused on claims of Jewish world domination and political control. It reflects a classic antisemitic trope that portrays Jews as secretly manipulating governments, political leaders, and the electoral process.

In the tweet below, the user alleges Jewish control of the U.S. election. The image depicts fabricated election results in which “Jews” win all 538 electoral votes, while Donald Trump and Kamala Harris are shown receiving none.

Jews are “Evil”

This common antisemitic trope portrays Jews as inherently evil or associating them with Satan or demonic imagery. In this dataset, CyberWell observed this theme frequently.

In the TikTok video below, the user reinforces this narrative by showing a silhouette of the devil overlaid with a Star of David.

Offensive Generalizations Against Jews

The third most common trend is a category defined by CyberWell as offensive generalizations: content that features antisemitic rhetoric and general slurs toward Jews without invoking a specific, well-known antisemitic trope (e.g., economic or media control). These posts rely on broad generalizations that attribute negative traits to Jews as an entire group, such as labeling all Jews as “untrustworthy” or “deceitful”.

The tweet below captures the essence of harmful generalizations against Jews. Posted in the lead-up to the Canadian elections, this tweet targets Rachel Bendayan, a Jewish Canadian member of Parliament and the previous Minister of Immigration, Refugees and Citizenship. The phrase “J3w nose spotted” relies on an antisemitic physical stereotype characterizing Jews as having exaggerated or distinct facial features—specifically large or “hooked” noses. The accompanying phrase “Canada is cooked” suggests that the presence of a Jewish figure in government leadership is a sign of national ruin.

The Rothschild Conspiracy Theory

The Rothschild family conspiracy remains one of the most prominent forms of antisemitism found on social media. The Rothschilds – a well-known Jewish banking family – are often used as a symbol of an alleged global Jewish conspiracy. Conspiracy theories involving the Rothschild family appeared consistently across all four elections in this dataset.

Historically, this conspiracy theory has centered around claims that the wealthy family seeks global domination through financial control. In recent years, it has expanded to blame the Rothschilds for a wide range of societal problems, international conflicts, and global crises. Today, it continues to be one of the most widespread and recognizable forms of antisemitism circulating on social media platforms.

In the Facebook post below, the user invokes this trope to allege that the Australian government is controlled by Jews, specifically framing it as “Zionist” influence.

Common Ways Antisemitic Narratives Are Expressed

In election-related posts, antisemitic narratives are often embedded through coded language, symbols, and imagery. These techniques allow users to spread hate while evading content moderation systems.

Emoji Use and Symbolic Cues

Users frequently rely on emoji combinations to signal antisemitic meaning. Across all four elections, commonly used emojis included:

  • 🥸, ✡️, or 🕎 – These emojis were used to mock Jews or imply manipulation, while referencing Jewish religious and cultural identity.
  • 🔻 – The inverted red triangle is a symbol popularized in Hamas propaganda to mark Jewish and Israeli individuals as targets. It often appears in posts calling for violence or promoting conspiracies.
  • 👹 – This devil-like emoji is commonly paired with phrases like “Synagogue of Satan”, linking Jews to demonic imagery and reinforcing dehumanizing narratives.

These visual markers serve as coded signals for like-minded users, allowing antisemitic messages to circulate without the explicit use of slurs.

The following tweet includes the 🥸 emoji, which echoes Nazi-era caricatures that depicted Jews with exaggerated facial features and glasses. The user employs cultural mockery — using the phrase “Oy Vey”— to insinuate Jewish control over the Canadian election.

Hashtags and Recurring Tropes

While no single hashtag dominated the dataset, older antisemitic tropes frequently resurfaced in election-related content. Hashtags like #Khazars and #SynagogueofSatan were used to revive longstanding conspiracies, including claims that Jews are imposters or inherently evil. These hashtags often appeared alongside contemporary political commentary, creating a false veneer of legitimacy or relevance.

 

Repeated Visual Themes

Stereotypical visuals appeared across all four election cycles. These included:

  • Photoshopped images of politicians – especially Jewish or pro-Israel figures – depicted as Orthodox Jews, often with exaggerated features or sinister expressions.

  • Memes portraying Jews as puppet-masters controlling political parties, sometimes recycled across national contexts. One such image that circulated during the U.S. election later appeared in Canada with only minor text changes.

“Zionist” as a Proxy Term

Across all four elections analyzed by CyberWell, the term “Zionist” was frequently used — either as a proxy for Jews, or as a slur disguised as political critique. Antisemitic users online generally use this term to evade content moderation without explicitly using the word “Jew”.

In the tweet below, the user uses vulgar language to claim that Zionists – used here as a proxy for Jews – control Canadian electoral candidates.

In the Facebook post below, the user shortens the term “Zionist” to “Zio,” likely as another tactic to evade content moderation. The word “Zio” is not just an abbreviation of “Zionist”: originating with white supremacists, it is now used online as an antisemitic slur targeting Jews. Furthermore, Meta has recognized the terms “Zionist” and “Zio” as proxies for Jews in the context of hate speech.

Creative Misspellings to Evade Moderation

To bypass detection systems, users often altered the spelling of key terms. Common examples for “Jew” included:

  • “Joos” – a phonetic misspelling designed to evade keyword filters.
  • “J3w” – a variation using number substitution to mask the original term.

These variations are widely understood within fringe communities and ensure that harmful content remains online longer than if slurs were used directly and flagged by moderation systems.
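The sketch below illustrates one way a detection pipeline could normalize such variants before keyword matching. The substitution table and variant list are illustrative assumptions, not any platform’s actual ruleset:

```python
import re

# Map common character substitutions back to their letters (illustrative).
LEET_SUBSTITUTIONS = str.maketrans({"3": "e", "0": "o", "1": "i", "$": "s"})

# Phonetic variants that survive character-level normalization (illustrative).
PHONETIC_VARIANTS = {"joos": "jews", "joo": "jew"}

def normalize(text: str) -> str:
    """Lowercase, undo number substitutions, then map known phonetic variants."""
    text = text.lower().translate(LEET_SUBSTITUTIONS)
    for variant, canonical in PHONETIC_VARIANTS.items():
        text = re.sub(rf"\b{variant}\b", canonical, text)
    return text

# Normalized text can now be matched against an ordinary keyword dictionary.
assert normalize("J3w nose spotted") == "jew nose spotted"
assert normalize("the Joos control everything") == "the jews control everything"
```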

Platform Policies and Enforcement

The five platforms analyzed in this dataset – Facebook, Instagram, TikTok, X, and YouTube – all maintain community guidelines prohibiting hate speech, including certain forms of antisemitism. However, enforcement varies significantly, especially when the antisemitic content is framed around political figures or election-related discourse.

 

1. Permissive Approach | X

X’s Civic Integrity Policy explicitly states that, in the absence of other policy violations, false or controversial political content does not inherently violate its guidelines. The policy notes:

“Not all false or untrue information about politics or civic processes constitutes manipulation or interference. In the absence of other policy violations, the following are generally not in violation of this policy:

  • inaccurate statements about an elected or appointed official, candidate, or political party;
  • organic content that is polarizing, biased, hyperpartisan, or contains controversial viewpoints expressed about elections or politics […]”.

As an organization specializing in antisemitism and digital platform policy, CyberWell reviewed X’s rules on hateful conduct and civic integrity and concluded that election-related content spreading antisemitic conspiracy theories or targeting Jews violates X’s Hateful Conduct policy. Therefore, in cases where the two policies appear to conflict (as with posts such as “Dutton is nothing more than another jew puppet” and “Carney is a foreign agent of Jew evil”), the content should be actioned with removal or limitation of visibility, according to X’s own rules.

However, CyberWell’s data indicates that this is not happening in practice. As reflected earlier in this report, X hosted the highest volume of antisemitic content in this dataset (66.3%). Yet only 27% of these posts have been removed to date.

The large volume, combined with the low removal rate, may indicate that X’s moderation team is applying the Civic Integrity Policy in a way that permits antisemitic content when tied to politics. It could also reflect weak enforcement of the Hateful Conduct Policy, or a failure to apply it to election-based antisemitic content altogether. Regardless of the reason, the fact that large amounts of blatantly antisemitic content remain online is unacceptable and calls for a thorough internal review of X’s enforcement practices.

 

2. Middle-Ground Approach | Meta and TikTok

Meta and TikTok both include written policies on election misinformation, but neither platform provides a specific policy section on posts that combine election-related content with hateful tropes, dehumanization, or harmful conspiracy theories.[12]

One likely explanation is that Meta and TikTok treat election-related hate speech as part of their general hate speech policy. This aligns with CyberWell’s dataset, which shows fewer antisemitic election-related posts on these platforms compared to X.

As an organization specializing in digital policy, CyberWell recommends that both platforms adopt a clear, targeted policy for election-related hate speech. Such a policy would give moderators better guidance during sensitive political periods and help ensure violative content is removed more consistently.

 

3. Explicit Election-Related Hate Speech Approach | YouTube

YouTube is currently the only major platform with a policy that directly acknowledges the intersection of elections and hate speech. Under its Election Misinformation Policy, YouTube specifies:

“Content that contains external links to material that would violate our policies and can cause a serious risk of egregious harm, like misleading or deceptive content relating to an election, hate speech targeting protected groups, or harassment targeting election workers, candidates, or voters. This can include clickable URLs, verbally directing users to other sites in a video, and other forms of link-sharing”.

This policy reflects a more proactive approach – and from what CyberWell is able to observe, seems to work in practice. In our dataset, YouTube accounted for only 1.5% of all antisemitic election-related content, the lowest share among the five platforms analyzed.

By explicitly recognizing that election-related videos may contain blatant hate speech, including antisemitism, YouTube has taken a clearer stance on this issue in its election policies. This likely contributes to stronger enforcement and may explain why YouTube is comparatively more effective at limiting such content.

________________________________________________________________

[12] Meta’s Hateful Conduct policy only mentions elections in one specific context, which relates to content about the ideas, practices, and institutions of a protected group. While such content violates Meta’s Community Standards in certain situations, Meta notes that during election periods it may adopt a stricter directive. According to Meta, the following is prohibited: “[…] Content attacking concepts, institutions, ideas, practices, or beliefs associated with protected characteristics, which are likely to contribute to imminent physical harm, intimidation or discrimination against the people associated with that protected characteristic. Meta looks at a range of signs to determine whether there is a threat of harm in the content. These include but are not limited to: […] whether there is a period of heightened tension such as an election […]”. In this context, Meta distinguishes between content directly targeting individuals in a protected group, which violates its Hateful Conduct policy, and content targeting the institutions, customs, and beliefs of a protected group, which violates the policy only in certain situations. However, even in this specific statement, Meta does not refer to election-related content but only to general hate speech content during elections. Thus, Meta, like TikTok, does not present a comprehensive policy on how to treat all types of election-related hate speech content.

Recommendations

After publishing four individual reports analyzing the U.K., U.S., Canadian, and Australian elections between 2024 and 2025, CyberWell has identified a recurring and concerning trend: the persistent spread of harmful antisemitic content across major social media platforms during election campaigns. To address this issue more effectively, CyberWell recommends platforms consider the following actions:
  1. Develop Clear Election-Specific Hate Speech Guidelines

Meta, TikTok, and X should adopt explicit policies addressing hate speech in the context of elections – similar to YouTube’s approach – especially when such content targets protected groups and minorities, including Jews.[13] CyberWell’s findings show that YouTube has been the most successful platform in addressing election-related hate speech, due in part to the specificity of its policy. Ambiguity in enforcement, particularly around posts referencing politicians or political systems, has allowed antisemitic tropes to spread unchecked under the guise of political critique. Thus, the more specific the policy, the more likely it is to be enforced in practice.

 

  2. Recognize and Flag Key Antisemitic Tropes and Language

CyberWell advises platforms to train moderators to recognize recurring IHRA Example 2 narratives, which appeared in 91.4% of analyzed posts. These include claims of Jewish world domination or control of specific governments, “Zionist control”, and narratives blaming Jews for political or social decline.

 

  3. Improve Detection of Coded Language, Emojis, and Visuals

Antisemitic posts commonly evade content moderation by using indirect or coded language, emojis, or imagery. CyberWell has shared with the platforms the specific emoji combinations, visuals, and deliberate misspellings that antisemites use to avoid detection.

 

  4. Prioritize Context-Aware Moderation During Elections

Platforms should give posts containing election-related hate speech the same priority and sensitivity as those containing election-related misinformation. Both types of content can cause significant harm and inflame tensions during election periods. Applying consistent moderation standards across these categories would ensure that hate speech is addressed with the same resources and urgency during sensitive electoral periods as other policy-violating content.

_________________________________________________________

[13] As highlighted in our 2024 Annual Report, CyberWell found that social media platforms remove Holocaust denial and distortion content at much higher rates than October 7 denial. This is likely because most platforms have explicit rules against Holocaust denial but had not yet created similar rules for October 7 denial. In the same way, while some platforms mention elections in their policies, the more specific the policy, the more consistently it will be enforced.

Appendix

This appendix provides several examples of each antisemitic narrative discussed in the report. The selected examples illustrate patterns identified across numerous posts in the dataset. Monitoring these patterns on the platforms may help identify additional posts that follow the same themes.

Jewish Political Control

Jews Are “Evil”

Offensive Generalizations Against Jews

The Rothschild Conspiracy Theory
