National elections are widely recognized as pivotal moments in any democracy. They are periods marked by intensified public discourse, heightened political competition, and elevated social tensions. Unfortunately, as these conditions and tensions are mirrored online, particularly on social media, elections have also become periods of increased anti-Jewish rhetoric.
Between late 2023 and mid-2025, CyberWell monitored online discourse surrounding national elections in four major Western democracies: the United Kingdom (U.K.), the United States (U.S.), Canada, and Australia. In doing so, CyberWell identified a clear pattern: increased levels of online antisemitism during election periods, centered on narratives of power and conspiracy theories about Jewish control of governments.
Election-related antisemitism often centers on the claim that Jewish individuals or groups control political candidates, governments, and electoral systems. Across the four countries analyzed in this report, antisemitic narratives often portrayed politicians as “puppets” of Jewish interests, invoking longstanding conspiracy theories about Jewish control over the media and financial institutions. In many cases, Jews were also scapegoated for societal or political developments in the lead-up to Election Day. Such tropes are rooted in a long history of antisemitism, and they resurface and are amplified during politically charged periods.
This report offers a cross-country analysis of how antisemitism spread across major social media platforms during four national elections. It begins with a brief overview of online antisemitism and hate speech in the period leading up to and during elections, incorporating relevant academic perspectives on the topic. Next, it presents a detailed examination of the dataset used in the analysis and the top narratives prominent across all four electoral cycles, as well as the removal rates of election-related antisemitism and how those rates align with each platform’s policy approach. The report concludes with practical recommendations for improving content moderation, strengthening policy implementation, and guiding future monitoring efforts during upcoming elections.
CyberWell is a non-profit organization dedicated to eradicating online antisemitism by driving the enforcement and improvement of community standards and hate speech policies across social media platforms. Using data-driven monitoring, CyberWell identifies where policies are either not enforced or insufficient to protect Jewish users from harassment and hate.
The organization’s unique methodology combines antisemitic keyword detection, a specialized dictionary based on the International Holocaust Remembrance Alliance’s (IHRA) definition of antisemitism, and human review. CyberWell’s professional analysts – trained in antisemitism, linguistics, and digital policy – vet each piece of content in relation to the IHRA definition and platform policies. For more about our methodology, check out our policy guidelines.
CyberWell currently monitors Facebook, Instagram, TikTok, X (formerly Twitter), and YouTube in both English and Arabic. The organization serves as a trusted flagger for both Meta (Facebook, Instagram, & Threads) and TikTok, enabling it to escalate policy-violating content and advise content moderation teams directly. As part of its commitment to democratizing data, CyberWell compiled the first-ever open data platform of online antisemitic content in May 2022.
Election periods often coincide with increased political expression across social media platforms. While criticism of politicians and public figures is expected during these times, such discourse can also serve as a vehicle for hate speech and extremism, particularly when it targets protected groups, including Jewish communities. Antisemitism is frequently, and disingenuously, framed as political critique, adding a layer of complexity to platforms’ efforts to detect and remove harmful content.
Additionally, academic research has underscored how often the boundary between political discourse and hate speech is crossed online. A 2022 study[1] of antisemitism on Twitter (now X) investigated “[…] how antisemitic speech manifested itself in political discourse on Twitter in the lead-up to the 2018 US midterm elections”.[2] The study found that longstanding antisemitic tropes, such as claims that Jews control the media or political systems, were regularly embedded in broader political commentary. As the authors note: “[…] age-old antisemitic conspiracy theories concerning Jewish ‘puppeteers’ are rehashed and utilized to express hatred of Jewish people […]”.[3] This tactic allowed such content to evade moderation.
Another study,[4] analyzing Italy’s 2022 general election, observed a “[…] prevalence of toxic and hateful interactions among Twitter users in the context of the 2022 Italian election”.[5] The study revealed that a significant portion of political posts contained hate speech, with religious and ethnic groups among the primary targets. As the authors state: “[…] religious and ethnic groups were often the target of hate speech, and these messages were frequently framed as part of mainstream political discourse, making them more difficult to detect and moderate […]”.[6] Together, these findings highlight the tendency of hateful narratives to be masked as legitimate mainstream political discourse, particularly during election periods.
This overlap between political and hateful content presents a clear challenge for content moderation – one that CyberWell has identified repeatedly while monitoring election-related antisemitism across multiple countries. Posts that refer to politicians or political movements may still violate hate speech policies, especially when they include dehumanizing language, conspiratorial framing, or harmful generalizations, such as attributing undue influence or negative traits to Jews as a whole. While platforms recognize election periods as sensitive, they tend to focus their monitoring and enforcement efforts primarily on misinformation,[1] with little attention allocated to election-related hate speech. Given CyberWell’s findings and the record-breaking increase in violent antisemitism, particularly against Jews in Western democracies, addressing hate speech during electoral cycles should be an equal priority.
________________________________________________________________________________________________
Between December 1, 2023, and April 22, 2025, CyberWell analyzed antisemitic election-related content across four national elections: the U.K., the U.S., Canada, and Australia. A total of 338 posts on this topic were verified as antisemitic across five major platforms: Facebook, Instagram, TikTok, X (formerly Twitter), and YouTube. The content, in both English and Arabic, was collected using CyberWell’s methodology, which includes AI-powered detection, keyword-based tracking, and manual review by trained analysts. All content was evaluated using the IHRA Working Definition of Antisemitism.
The sample was distributed across platforms as follows, with percentages representing the share of total posts collected:
X | 66.3%
Facebook | 25.7%
TikTok | 3.6%
Instagram | 3%
YouTube | 1.5%
__________________________________
This distribution reflects the relative prominence of each platform in the context of election-related antisemitism, with X alone accounting for two-thirds of all recorded posts. This underscores X’s outsized role in spreading election-related antisemitic narratives across social media.

The breakdown below shows the number of posts associated with each country’s election:
U.S. | 38.8%
Canada | 25.4%
Australia | 23.7%
U.K. | 12.1%

The 338 antisemitic posts analyzed across the four national elections accumulated over 3.5 million views and 229,062 user interactions[1] across five major social media platforms. This level of engagement underscores the extent to which antisemitic narratives were not only present but actively circulated and consumed during these election periods.
Below is a breakdown of the total engagement observed:
Comments | 9,642
Likes | 174,269
Shares & Retweets[2] | 45,151
Views[3] | 3,553,353
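As a quick consistency check, the interaction total cited above can be reproduced directly from this breakdown; the short snippet below sums the report’s own figures (views are counted separately from interactions).

```python
# Reproducing the engagement totals from the breakdown above.
comments = 9_642
likes = 174_269
shares_retweets = 45_151
views = 3_553_353

interactions = comments + likes + shares_retweets
print(f"{interactions:,}")  # 229,062 -- matches the interaction total cited above
print(f"{views:,}")         # 3,553,353 views, reported separately from interactions
```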
High engagement rates indicate that antisemitic posts are not only being shared during election periods but are also reaching – and resonating with – large audiences. This underscores the urgency for platforms to detect and moderate violative content early, especially when such content is strategically framed to evade moderation or go viral.
Notably, the overwhelming majority of views and engagement in the sample stem from U.S. election-related content. Although this content represents 38.8% of the entire dataset, it accounts for 90.1% of total views and 85.5% of total engagement – a disproportionately high level of interaction compared to election content from other countries. This aligns with CyberWell’s broader observations that antisemitism often gains greater traction in response to real-world events linked to the U.S., likely because openly antisemitic users and ‘influencers’ who post in English generally focus on U.S. and Israeli developments rather than events in other countries.
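A rough calculation from the percentages above illustrates the size of that gap; the per-post averages below are estimates derived only from figures already reported in this section.

```python
# Back-of-the-envelope estimate of the U.S. views disproportion,
# using only percentages and totals reported above (results are rounded).
total_posts = 338
total_views = 3_553_353

us_posts = round(total_posts * 0.388)    # ~131 posts tied to the U.S. election
us_views = round(total_views * 0.901)    # ~3.2 million views
other_posts = total_posts - us_posts     # ~207 posts from the other three elections
other_views = total_views - us_views     # ~352,000 views

print(us_views // us_posts)        # ~24,400 views per U.S.-election post
print(other_views // other_posts)  # ~1,700 views per post elsewhere
```

By this estimate, the average U.S.-election post drew roughly fourteen times the views of posts tied to the other three elections.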
________________________________________________________________
CyberWell documented the availability status of each antisemitic post at two distinct points in time:
NOTE: As in prior reports, CyberWell’s methodology includes a multi-week window before removal rates are published, to account for moderation lag.
These two benchmarks provide a comparative view of platform responsiveness, both during and after the election periods.
NOTE: Removal rate data was not recorded in real time for the U.K. election in 2024. As a result, real-time removal rates reflect only the 297 posts collected from Australia, Canada, and the U.S. The current removal rates, however, reflect all 338 posts, including the U.K. For transparency, the tables below list the adjusted and total post counts per platform.
Excluding U.K. data (297 total posts)

Including U.K. data (338 total posts)

It is important to note that CyberWell also holds Trusted Partner status (hereinafter: “TP”) with both Meta and TikTok, which enables prioritized reporting pathways and higher removal responsiveness. As a result, the removal rates for Facebook, Instagram, and TikTok reflect the experience of a TP rather than that of a typical user. Removal data for X and YouTube, where CyberWell does not have the same status, reflects general user enforcement outcomes.
_________________________________________________________________________________
[1] CyberWell reported all posts in this dataset that were found to violate the platforms’ policies.
As previously mentioned, CyberWell uses the IHRA Working Definition of Antisemitism to classify content, including both traditional antisemitic tropes and contemporary examples—particularly those related to the State of Israel or comparisons between Israeli policy and that of Nazi Germany.
Of the 11 examples featured in the working definition, 10 appeared in the dataset collected across all four elections. Notably, IHRA Example 2 was the most widespread, appearing in 91.4% of all posts.
This example is defined as:
“Making mendacious, dehumanizing, demonizing, or stereotypical allegations about Jews as such or the power of Jews as collective — such as, especially but not exclusively, the myth about a world Jewish conspiracy or of Jews controlling the media, economy, government or other societal institutions”.
IHRA Example 2 sits at the core of all major platforms’ hate speech policies, as it targets Jews, a group protected on the basis of religion, by means of generalizations and conspiracies – criteria that typically violate community guidelines. As such, it is reasonable to expect consistent and strong enforcement against such content. In practice, however, enforcement against election-related hate speech, even when it violates platform policies, appears to be significantly lower than average enforcement rates against hate content across platforms.
NOTE: Not every post deemed antisemitic according to the IHRA definition necessarily violates social media platforms’ policies and community guidelines. For example, IHRA Examples 7-10 often involve antisemitic content directed at the State of Israel, which is not a protected category under platform rules. In contrast, content aligned with other examples, such as IHRA Example 2, targets Jews as a protected group and typically violates platform guidelines.
Antisemitic content during the election periods across the U.K., U.S., Canada, and Australia followed recognizable patterns rooted in conspiracy, demonization, and cultural erasure.
The top four antisemitic narratives and tropes in this dataset were:
The most dominant narrative in this dataset focused on claims of Jewish world domination and political control. It reflects a classic antisemitic trope that portrays Jews as secretly manipulating governments, political leaders, and the electoral process.
In the tweet below, this user alleges Jewish control of the U.S. election. The image depicts fabricated election results with "Jews" winning all 538 electoral votes, whereas Donald Trump and Kamala Harris are shown receiving none.

This common antisemitic trope portrays Jews as inherently evil or associates them with Satan and demonic imagery. CyberWell observed this theme frequently in this dataset.
In the TikTok video below, the user reinforces this narrative by showing a silhouette of the devil overlaid with a Star of David.

The third most common trend is a category CyberWell defines as offensive generalizations: content featuring antisemitic rhetoric and general slurs toward Jews without invoking a specific, well-known antisemitic trope (e.g., economic or media control). These posts rely on broad generalizations that attribute negative traits to Jews as an entire group, such as labeling all Jews “untrustworthy” or “deceitful”.
The tweet below captures the essence of harmful generalizations against Jews. Posted in the lead-up to the Canadian elections, it targets Rachel Bendayan, a Jewish Canadian member of Parliament and former Minister of Immigration, Refugees and Citizenship. The phrase “J3w nose spotted” relies on an antisemitic physical stereotype characterizing Jews as having exaggerated or distinct facial features—specifically large or “hooked” noses. The accompanying phrase “Canada is cooked” suggests that the presence of a Jewish figure in government leadership is a sign of national ruin.

The Rothschild family conspiracy remains one of the most prominent forms of antisemitism found on social media. The Rothschilds – a well-known Jewish banking family – are often used as a symbol of an alleged global Jewish conspiracy. Conspiracy theories involving the Rothschild family appeared consistently across all four elections in this dataset.
Historically, this conspiracy theory has centered on claims that the wealthy family seeks global domination through financial control. In recent years, it has expanded to blame the Rothschilds for a wide range of societal problems, international conflicts, and global crises.
In the Facebook post below, the user invokes this trope to allege that the Australian government is controlled by Jews, specifically framing it as “Zionist” influence.

In election-related posts, antisemitic narratives are often embedded through coded language, symbols, and imagery. These techniques allow users to spread hate while evading content moderation systems.
Users frequently rely on emoji combinations to signal antisemitic meaning. Across all four elections, commonly used emojis included:
These visual markers serve as coded signals for like-minded users, allowing antisemitic messages to circulate without the explicit use of slurs.
The following tweet includes the 🥸 emoji, which echoes Nazi-era caricatures that depicted Jews with exaggerated facial features and glasses. The user employs cultural mockery — using the phrase “Oy Vey”— to insinuate Jewish control over the Canadian election.

While no single hashtag dominated the dataset, older antisemitic tropes frequently resurfaced in election-related content. Hashtags like #Khazars and #SynagogueofSatan were used to revive longstanding conspiracies, including claims that Jews are imposters or inherently evil. These hashtags often appeared alongside contemporary political commentary, creating a false veneer of legitimacy or relevance.
Stereotypical visuals appeared across all four election cycles. These included:




Across all four elections analyzed by CyberWell, the term “Zionist” was frequently used — either as a proxy for Jews, or as a slur disguised as political critique. Antisemitic users online generally use this term to evade content moderation without explicitly using the word “Jew”.
In the tweet below, the user uses vulgar language to assert that Zionists – used here as a proxy for Jews – control Canadian electoral candidates.

In the Facebook post below, the user shortens the term “Zionist” to “Zio,” likely as another tactic to evade content moderation. The word “Zio” is not just an abbreviation of “Zionist”. Originating with white supremacists, it is now used online as an antisemitic slur targeting Jews. Indeed, Meta has recognized the terms “Zionist” and “Zio” as proxies for Jews in the context of hate speech.

To bypass detection systems, users often altered the spelling of key terms. Common examples for “Jew” included:

These variations are widely understood within fringe communities and ensure that harmful content remains online longer than if slurs were used directly and flagged by moderation systems.
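The report’s own “J3w” example above shows how simple character substitution defeats naive string matching. The sketch below is a minimal illustration, under stated assumptions, of how a moderation pipeline might normalize such variants before matching; the substitution map and watchlist term are hypothetical, not any platform’s actual ruleset.

```python
# Minimal sketch: normalizing leetspeak-style substitutions before matching.
# The substitution map is an illustrative assumption, not a real platform ruleset.
SUBSTITUTIONS = str.maketrans({
    "3": "e",  # e.g., "J3w" -> "Jew"
    "0": "o",
    "1": "i",
    "@": "a",
    "$": "s",
})

def normalize(text: str) -> str:
    """Lowercase the text and undo common character substitutions."""
    return text.lower().translate(SUBSTITUTIONS)

def contains_term(text: str, term: str) -> bool:
    """Check for a watchlist term in normalized text; matches still need human review."""
    return term in normalize(text)

print(contains_term("J3w nose spotted", "jew"))  # True
```

Normalization of this kind only narrows the evasion gap: emoji combinations, coded hashtags, and novel misspellings still require ongoing lexicon updates and human judgment, consistent with the recommendations later in this report.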
The five platforms analyzed in this dataset – Facebook, Instagram, TikTok, X, and YouTube – all maintain community guidelines prohibiting hate speech, including certain forms of antisemitism. However, enforcement varies significantly, especially when the antisemitic content is framed around political figures or election-related discourse.
X’s Civic Integrity Policy explicitly states that, in the absence of other policy violations, false or controversial political content does not inherently violate its guidelines. The policy notes:
“Not all false or untrue information about politics or civic processes constitutes manipulation or interference. In the absence of other policy violations, the following are generally not in violation of this policy:
As an organization specializing in antisemitism and digital platform policy, CyberWell reviewed X’s rules on hateful conduct and civic integrity and concluded that election-related content spreading antisemitic conspiracy theories or targeting Jews violates X’s Hateful Conduct policy. Therefore, in cases where the Hateful Conduct policy and the Civic Integrity policy appear to conflict (for example, posts such as “Dutton is nothing more than another jew puppet” and “Carney is a foreign agent of Jew evil”), the content should be actioned with removal or limitation of visibility, in accordance with X’s rules.
However, CyberWell’s data indicates that this is not happening in practice. As reflected earlier in this report, X hosted the highest volume of antisemitic content in this dataset (66.3%). Yet only 27% of these posts have been removed to date.
The large volume, combined with the low removal rate, may indicate that X’s moderation team is applying the Civic Integrity Policy in a way that permits antisemitic content when it is tied to politics. It could also reflect weak enforcement of the Hateful Conduct Policy, or a failure to apply it to election-related antisemitic content altogether. Regardless of the reason, the fact that large amounts of blatantly antisemitic content remain online is unacceptable and calls for a thorough internal review of X’s enforcement practices.
Meta and TikTok both include written policies on election misinformation, but neither platform provides a specific policy section on posts that combine election-related content with hateful tropes, dehumanization, or harmful conspiracy theories.[1]
One likely explanation is that Meta and TikTok treat election-related hate speech as part of their general hate speech policy. This aligns with CyberWell’s dataset, which shows fewer antisemitic election-related posts on these platforms compared to X.
As an organization specializing in digital policy, CyberWell recommends that both platforms adopt a clear, targeted policy for election-related hate speech. Such a policy would give moderators better guidance during sensitive political periods and help ensure violative content is removed more consistently.
YouTube is currently the only major platform with a policy that directly acknowledges the intersection of elections and hate speech. Under its Election Misinformation Policy, YouTube specifies:
“Content that contains external links to material that would violate our policies and can cause a serious risk of egregious harm, like misleading or deceptive content relating to an election, hate speech targeting protected groups, or harassment targeting election workers, candidates, or voters. This can include clickable URLs, verbally directing users to other sites in a video, and other forms of link-sharing”.
This policy reflects a more proactive approach – and from what CyberWell is able to observe, seems to work in practice. In our dataset, YouTube accounted for only 1.5% of all antisemitic election-related content, the lowest share among the five platforms analyzed.
By explicitly recognizing that election-related videos may contain blatant hate speech, including antisemitism, YouTube has taken a clearer stance on this issue in its election policies. This likely contributes to stronger enforcement and may explain why YouTube is comparatively more effective at removing such content.
________________________________________________________________
[1] Meta’s Hateful Conduct policy mentions elections in only one specific context, relating to content about the ideas, practices, and institutions of a protected group. While such content violates Meta’s Community Standards only in certain situations, Meta notes that it may adopt a stricter approach during election periods. According to Meta, the following is prohibited: “[…] Content attacking concepts, institutions, ideas, practices, or beliefs associated with protected characteristics, which are likely to contribute to imminent physical harm, intimidation or discrimination against the people associated with that protected characteristic. Meta looks at a range of signs to determine whether there is a threat of harm in the content. These include but are not limited to: […] whether there is a period of heightened tension such as an election […]”. In this context, Meta distinguishes between content directly targeting individuals in a protected group, which violates its Hateful Conduct Policy, and content targeting the institutions, customs, and beliefs of a protected group, which violates the policy only in certain situations. However, even in this statement, Meta does not address election-related content specifically, only general hate speech during elections. Thus, Meta, like TikTok, does not present a comprehensive policy on how to treat all types of election-related hate speech content.
After publishing four individual reports analyzing the U.K., U.S., Canadian, and Australian elections between 2024 and 2025, CyberWell has identified a recurring and concerning trend: the persistent spread of harmful antisemitic content across major social media platforms during election campaigns. To address this issue more effectively, CyberWell recommends platforms consider the following actions:
Meta, TikTok, and X should adopt explicit policies addressing hate speech in the context of elections – similar to YouTube’s approach – especially when such content targets protected groups and minorities, including Jews.[1] CyberWell’s findings show that YouTube has been the most successful platform in addressing election-related hate speech, due in part to the specificity of its policy. Ambiguity in enforcement, particularly around posts referencing politicians or political systems, has allowed antisemitic tropes to spread unchecked under the guise of political critique. Thus, the more specific the policy, the more likely it is to be enforced in practice.
CyberWell advises platforms to train moderators to recognize recurring IHRA Example 2 narratives, which appeared in 91.4% of analyzed posts. These include claims of Jewish world or government-specific domination, “Zionist control”, and narratives blaming Jews for political or social decline.
Antisemitic posts commonly evade content moderation through indirect or coded language, emojis, or imagery. CyberWell has shared with the platforms the specific emoji combinations, visuals, and deliberate misspellings that antisemites use to avoid detection.
Platforms should give posts containing election-related hate speech the same prioritization and sensitivity they give to posts containing election-related misinformation. Both types of content can cause significant harm and inflame tensions during election periods. Applying consistent moderation standards across these categories would ensure that hate speech is addressed with the same resources and urgency during sensitive electoral periods as other policy-violating content.
_________________________________________________________
[1] As highlighted in our 2024 Annual Report, CyberWell found that social media platforms remove Holocaust denial and distortion content at much higher rates than October 7 denial. This is likely because most platforms have explicit rules against Holocaust denial but had not yet created similar rules for October 7 denial. In the same way, while some platforms mention elections in their policies, the more specific the policy, the more consistently it will be enforced.
This appendix provides several examples of each antisemitic narrative discussed in the report. The selected examples illustrate patterns identified across numerous posts in the dataset. Monitoring these patterns on the platforms may help identify additional posts that follow the same themes.







