Bondi Beach Terror Attack: How Social Media Globalised a Community’s Trauma

In the aftermath of the Bondi Beach terror attack, social media became a parallel crime scene, with content glorifying the attack, blaming Jews for their own victimisation, and compounding the trauma of an already suffering community.

Executive Summary

The Bondi Beach attack was a targeted act of antisemitic terrorism that killed fifteen people, including a child, and traumatised Australia’s Jewish community. In its aftermath, social media became a parallel crime scene, with content glorifying the attack, blaming Jews for their own victimisation, and compounding the trauma of an already suffering community.1

The terrorist attack at Bondi Beach on December 14, 2025, was a targeted act of antisemitic violence that shook Australia and reverberated far beyond its shores. Carried out during a Hanukkah lighting event, the attack deliberately targeted the Bondi Jewish community during a celebration and in a public space meant to be safe. Fifteen people were murdered, including a ten-year-old child, and more than forty others were injured. 

As details emerged, Australia’s prime minister stated that the attack appeared to have been motivated by Islamic State (ISIS) ideology. Police later confirmed that “homemade” Islamic State flags and improvised explosive devices were found in a vehicle used by the attackers. 

Along with the physical and psychological aftermath of the attack, a second and less regulated arena of harm rapidly emerged online. This report examines the digital response to the Bondi Beach attack and identifies the dominant and deeply concerning narratives, including violence-related content and Conspiratorial Self-Victimisation, that gained major traction in online spaces.

 

Footnotes:

  1.  This report was prepared by CyberWell’s Australia Desk. As such, it is written using Australian English spelling.

Methodology

Between December 14, 2025, and December 23, 2025, CyberWell analysed online content relating to the Bondi Beach terror attack. A total of 87 posts and 77 comments were verified as antisemitic across five major platforms: Facebook, Instagram, TikTok, X, and YouTube. Content was identified in English, French, and Arabic using CyberWell’s unique methodology.

This methodology includes identifying antisemitic keywords, applying a specialised dictionary grounded in the International Holocaust Remembrance Alliance’s (IHRA) working definition of antisemitism — expanded to include emergent antisemitic tropes as identified by CyberWell that are not explicitly covered in the working definition — and systematic human review.
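As an illustration only, the sketch below shows how a keyword-and-dictionary pass of this kind might feed a human review queue. The dictionary entries, names, and data structures are placeholders and do not represent CyberWell’s actual detection rules or tooling.

```python
# Minimal sketch of a keyword/dictionary flagging pipeline with human review.
# Illustrative only: the terms, labels, and structures below are placeholders,
# not CyberWell's actual dictionary.
from dataclasses import dataclass, field

# Hypothetical dictionary mapping terms/emojis to candidate narrative labels.
TROPE_DICTIONARY = {
    "false flag": "conspiratorial_self_victimisation",
    "🧃": "coded_reference",        # "juice" used as a phonetic stand-in for "Jews"
    "🐷": "dehumanising_imagery",
}

@dataclass
class FlaggedItem:
    platform: str
    text: str
    matched_terms: list = field(default_factory=list)
    verified: bool = False          # set to True only after human review

def flag_candidates(items):
    """Return items whose text matches any dictionary term, queued for human review."""
    flagged = []
    for item in items:
        hits = [term for term in TROPE_DICTIONARY if term in item["text"].lower()]
        if hits:
            flagged.append(FlaggedItem(item["platform"], item["text"], hits))
    return flagged

# Human reviewers then confirm or reject each candidate against the IHRA-based
# criteria; only confirmed items enter the verified dataset.
```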

 

Platform Distribution

The data sample was distributed across platforms as follows, with percentages representing each platform’s share of the total posts and comments collected separately. The separation between posts and comments reflects several considerations. First, platforms often treat posts as a more serious form of content than comments, since they involve creating original content rather than simply responding to existing content. Second, posts are generally easier to flag at scale as they tend to be less context dependent. Third, posts are more likely to achieve wide reach or virality. For these reasons, posts and comments were analysed separately to ensure a more accurate and context-sensitive assessment of the dataset. This division is also reflected later in the removal rates section.
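For illustration, a per-platform share of this kind can be computed by tallying posts and comments separately and dividing each platform’s count by the total for that content type. The sketch below uses an assumed input format and placeholder data, not the report’s underlying dataset.

```python
# Sketch of the share calculation: posts and comments are tallied separately,
# and each platform's percentage is taken against the total for that content type.
from collections import Counter

def platform_shares(items):
    """items: list of dicts such as {"platform": "X", "type": "post"} (assumed format)."""
    shares = {}
    for content_type in ("post", "comment"):
        counts = Counter(i["platform"] for i in items if i["type"] == content_type)
        total = sum(counts.values())
        shares[content_type] = (
            {platform: round(100 * n / total, 2) for platform, n in counts.items()}
            if total else {}
        )
    return shares
```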

 

Views & Engagement

A substantial level of visibility and user interaction was identified across the 164 antisemitic posts and comments analysed in this dataset from major social media platforms. The data indicated that these posts generated more than 8.1 million views2 and over 255,450 user interactions,3 demonstrating that antisemitic narratives were not only present online, but also actively consumed, amplified, and engaged with following the attack.

It is important to note that, on every platform monitored aside from X, the number of views on comments is not publicly available – the only engagement metric that can be measured is “likes”. As nearly half of the dataset consists of comments, the true extent of exposure is likely to be substantially higher. Moreover, this analysis reflects a verified data sample and not a comprehensive analysis of all relevant content across all platforms. Therefore, these figures represent only a partial snapshot of what is likely to be much broader and more pervasive harmful online discourse.
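Footnote 3 defines the engagement total as the sum of likes, shares, and comments per post, with retweets added for X. The short sketch below illustrates that calculation; the field names are assumptions for illustration, not a specific platform API schema.

```python
# Sketch of the engagement total described in footnote 3.
def total_engagement(posts):
    total = 0
    for p in posts:
        total += p.get("likes", 0) + p.get("shares", 0) + p.get("comments", 0)
        if p["platform"] == "X":
            total += p.get("retweets", 0)   # retweets counted only for X
    return total
```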

 

Footnotes:

2. For Facebook and Instagram this metric is available only for videos/reels.

3. Total engagement is calculated as the sum of likes, shares, and comments across all posts in the dataset, per platform. For X, the number also includes retweets.

Removal Rates

 

Note: Of the 57.58% of content identified on X that remained online, 12.12% were subject to visibility restrictions or were labelled as synthetic or manipulated media.

 

Key Data Findings

Almost 59% of all content in the dataset was removed – a higher percentage than the 52.4% removal rate documented in CyberWell’s 2025 Annual Report. However, there is a key difference: as detailed below, most content in this dataset involved violence-related narratives, including glorification, celebration, and proactive calls for further attacks. This type of content is generally treated as the most severe category under platform policy frameworks and, unlike many antisemitic narratives that fall under hate speech or bullying/harassment policies, enforcement against it is typically more consistent.

Beyond violence, another prominent narrative in the dataset was conspiratorial content accusing Jews of orchestrating the attack against themselves, with multiple sub-narratives. This category is generally not addressed in most platform policies, with TikTok as a notable exception. However, while TikTok fully enforced its policy against this narrative in posts, enforcement was weaker in comment sections, where only 60% of the reported content was removed.

TikTok also exhibited a significantly higher volume of violent and conspiratorial content than in previous violent antisemitic incidents, with harmful narratives circulating at scale through visual trends, emojis, and AI-generated imagery. A substantial portion of this content relied on coded gaming language widely consumed by younger users. The combination of high volume, visual formats, and youth-oriented engagement presents a heightened risk of exposure and potential radicalisation among young people.

X had the highest share of posts (37.93%) and the lowest removal rate (42.42%). Although X accounted for 20.12% of the total dataset, it generated 83.83% of views and 32.1% of total interactions, underscoring X’s outsized role in amplifying this content online.
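For illustration, the disproportion behind this finding can be expressed as a simple ratio of a platform’s share of views to its share of the dataset. The sketch below is a worked illustration using the figures above, not CyberWell’s analysis code.

```python
# Sketch: compare a platform's share of views with its share of the dataset.
def amplification_ratio(item_share, view_share):
    """Both arguments are fractions of the total; a result above 1 means views
    are disproportionate to the platform's share of collected content."""
    return view_share / item_share

# Using the report's figures for X: 20.12% of the dataset, 83.83% of views.
print(amplification_ratio(0.2012, 0.8383))   # ≈ 4.2
```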

On Meta, the platform’s Community Standards do not currently classify Conspiratorial Self-Victimisation as policy-violating, a gap reflected in the disparity between enforcement of violent content and enforcement of this conspiratorial category.

On YouTube, the vast majority of content appeared in comment sections. Across TikTok, Meta, and YouTube, the spread of severe antisemitic content in response to news coverage was a recurring issue: comment sections attached to news content enabled antisemitic narratives to proliferate with limited moderation.

 

Top Antisemitic Narratives

Violence

In the immediate aftermath of the attack, numerous users shared content that glorified the violence, praised the perpetrator, and sought to justify the attack explicitly on the basis of the victims’ Jewish identity. In some cases, this content also included direct calls for further violence.

 

Glorification of Violence

This content circulated through two main narratives. The first involved the open celebration of the victims’ deaths, with posts expressing approval, mockery, or indifference toward the killing of Jewish civilians. The second emerged as a newly identified gaming trend, documented by CyberWell during its monitoring of online responses to the Bondi Beach attack. This trend repurposed the violence as entertainment, further trivialising the attack and normalising the dehumanisation of Jewish victims.

A particularly disturbing example of this celebratory motif appears in the TikTok videos shown below. In these videos, users display imagery referencing Bondi, the location of the attack against the Jewish community, while a man is shown dancing to music. In Exhibit A, the overlaid text “-14 🧃” and “-1 👶” references the number of Jewish victims, including a child. The “🧃” emoji functions as a coded reference to Jews, as the word “juice” is used as a phonetic substitute for “Jews” in English-language antisemitic slang. Exhibit B follows the same pattern, with overlaid text stating “-15 🧃”. In both cases, the content conveys celebration and approval of the murder of Jews, using coded symbols and visual cues to trivialise and glorify mass violence.

 

Exhibit A

Exhibit B

 

While monitoring various trends celebrating the Bondi Beach attack, CyberWell also identified a TikTok trend rooted in gaming culture that gained traction following the shooting. In these posts, users frame the real-world attack through Grand Theft Auto (GTA) references – often using captions like “Bondi Beach got GTA Online IRL before GTA 6” – to celebrate and normalise the victims’ deaths. Videos typically show static or slow-moving shots of cars, motorbikes, or bicycles in everyday public spaces, echoing GTA’s aesthetic of turning real cities into settings for casual, exaggerated violence. Users then draw explicit parallels between in-game scenarios and the real-world atrocities at Bondi Beach.

These posts utilise real-world tragedies to ‘memify’ mass murder, trivialise Jewish death, and contribute to the normalisation of antisemitic harm as casual, ironic, and shareable.

In both examples below, the captions blatantly trivialise and mock Jewish deaths. In Exhibit C, the post describes the shooting as “That shooting was random,” accompanied by hashtags such as “#stark,” “#blowup,” “#sydney,” and “#australia”, stripping the attack of context and reducing mass murder to a viral moment. In Exhibit D, the caption adds “#fyp” and “#views is a joke”, explicitly framing the violence as entertainment and signalling indifference to the loss of life.

 

Exhibit C

 

Exhibit D

Praise for the Perpetrator

Another prominent narrative was the explicit praise of the perpetrator and the celebration of the attack itself. In the Arabic-language post below, the user replies to a tweet reporting on the identity of the Sydney Hanukkah shooter by glorifying the attacker. The user states in Arabic: “May God welcome the hero, bless him, raise his status, and accept his deed”. By praising the attacker in religious terms, the user expresses approval of an antisemitic act of violence and frames the perpetrator’s actions as honourable, thereby glorifying violence against Jews.

Justification of the Attack

A further narrative identified was the justification of violence against Jews through collective blame. This content framed the killing of Jewish civilians as a legitimate response to the military actions of the State of Israel, explicitly holding Jews as a group responsible for geopolitical events. In the French-language post below, the user justified the attack by stating (translation): “No, it is not an antisemitic attack. It is an anti-Zionist resistance at the international scale. The terrorists are the Jews who are killing Arab Muslims in Palestine in the name of the Torah, put in place by the Hebrew state supported by Europe and the United States [...]”.

Calls for Further Violence

The most alarming narrative identified online involved explicit calls for further violence. Posts advocating additional harm against Jews appeared predominantly in Arabic and represented a clear escalation from justification or praise to direct incitement. In one such example, an Arabic-language post explicitly called for the death and torture of a victim’s family. The user wrote (translation): “I wish they died and were tortured in the utmost and extreme way with him”.

 

This content goes beyond rhetorical hostility and constitutes an overt call for violence against Jews, a disturbing trend that CyberWell has documented with increasing prevalence and severity. By targeting victims’ families and expressing a desire for extreme physical suffering, this post reflects the normalisation of Jew-hatred so severe that it crosses into violent ideation, with users no longer fearing public outcry or condemnation.

 

Conspiratorial Self-Victimisation

The second most dominant narrative identified in the online response to the Bondi Beach terror attack was Conspiratorial Self-Victimisation (CSV). This narrative denies the reality of antisemitic violence and instead claims that Jews orchestrate attacks against themselves for political or strategic gain. Since October 7, this framing has repeatedly surfaced following attacks on Jews and re-emerged prominently after the Bondi Beach shooting.

Rather than recognising Jewish victimhood, these narratives reverse responsibility by casting Jews as perpetrators, minimising the harm caused and legitimising further violence. Three recurring themes were identified. The first claimed that Jews themselves or Israel carried out the attack. The second falsely alleged that the shooter was a former member of the Israel Defense Forces. The third asserted that the victims were paid actors, denying the authenticity of the deaths altogether.

Together, these conspiracies erase victimhood, absolve attackers, and reinforce antisemitic tropes that normalise violence. Despite their clear role in inciting harm, such narratives remain unaddressed across platform policies. TikTok is the sole platform that consistently treats this content as violative, classifying it under its harmful misinformation policy.

Examples illustrating each of these three narratives are provided below.

 

Jews/Israel Did It

In the Facebook post below, the user shares a diagram that explicitly accuses Jews of fabricating attacks in order to manipulate public sympathy and justify violence. The user employs the term “Zionist” as a proxy for Jews and uses a fabricated “Zionist Cycle” to claim that Jews “fake a major attack,” exaggerate statistics, and exploit historical atrocities, such as the Holocaust, for personal gain.

 

Another Facebook post advances the Conspiratorial Self-Victimisation narrative by blaming Jews for an attack committed against them. Referring to the Bondi Beach shooting, the user writes, “Another zionist false flag. Pigs!”. In this context, “Zionist” again functions as a proxy for Jews, while the “false flag” claim alleges that Jews orchestrated the violence themselves. This framing denies Jewish victimhood, shifts responsibility onto the targeted community, and is reinforced by the dehumanising insult “Pigs!”.

 

Similarly, in a subsequent post on X, a user shared a video captioned “Another False Flag?”, implying a pattern of fabricated violence attributed to Israel. In accompanying text, the user accused Israel’s intelligence agency, the Mossad, of orchestrating “kosher false flag attacks”, including the Bondi Beach attack, while simultaneously denying the reality of the violence by claiming that “nobody believes” such events. This post both advances antisemitic conspiracy theories and erases the suffering of real victims by treating the attack as a fabrication.

Claims that the Shooter was Ex-IDF

Another antisemitic narrative circulating after the attack falsely framed the perpetrator as a former Israel Defense Forces (IDF) soldier named “David Cohen.”

Both the Facebook post (Exhibit E) and the TikTok video (Exhibit F) below promote this claim. In Exhibit E, the user alleges that the Bondi Beach shooting was a “false flag” operation, juxtaposing images of the alleged shooter with a Facebook profile and asserting they depict the same individual. The post identifies the person as “David Cohen,” emphasises that he is Jewish and from Israel, and suggests that this purported resemblance indicates Jewish/Israeli involvement. Although the user notes the claim is unconfirmed, they repeatedly imply that Israel’s alleged past conduct makes the “false flag” explanation plausible. By asserting that the shooter was Jewish and Israeli, the post shifts responsibility for the attack onto Jews themselves and reinforces antisemitic conspiracy narratives.

Exhibit F advances the same claim in a simplified and highly shareable format. The video states that the “Bondi Beach shooter was X-IDF” and “served in the Israeli army,” accompanied by hashtags designed to increase visibility. This falsely attributes responsibility to Israel and the IDF and reframes an antisemitic attack as an orchestrated event.

 

Exhibit F

 

The post below further reflects this narrative while relying on coded antisemitic symbolism. In the caption, the user writes, “🧃media won’t show you”, using the juice box emoji as a stand-in for Jews to evade moderation. This shorthand draws on antisemitic conspiracy theories alleging Jewish control of the media while continuing to promote blame-shifting and hostility toward Jews.

Accusing Victims of Being Paid Actors

Another prominent antisemitic narrative that circulated after the attack falsely framed one of the victims, human rights lawyer Arsen Ostrovsky, as a “paid actor.” Using AI-generated images, users claimed that Ostrovsky had fabricated his injuries, thus denying the reality of the violence and casting a terror victim as a participant in an allegedly staged event.

This is evident in both Exhibits G and H below, where the AI-generated image of Ostrovsky circulated widely across platforms alongside claims that it would be “impossible” for him to have survived both the October 7 attack and the Bondi shooting.

 

Exhibit G

 

Exhibit H

 

In the post shown in Exhibit I, the narrative escalates into overt mockery. The user argues that no competent medic would bandage a wound without cleaning it first, implying that the injuries were staged and that the victim is deceptive. The virality of this post is particularly notable, as it received 4.1 million views, significantly amplifying the denial narrative. In Exhibit J, another user reinforces this denial by stating, “who the hell takes a selfie after being shot,” using the victim’s behaviour as “evidence” of fabrication and shifting responsibility away from the attacker. The addition of a digitally edited pig nose further dehumanises the victim, intensifying ridicule and reinforcing narratives that erase Jewish victimhood.

Exhibit I

Exhibit J

Scapegoating Jews

A further sub-narrative involved the scapegoating of Jews for the abolition of the White Australia policy.4 This framing draws directly from white nationalist and extremist ideology, which portrays Jews as covert forces responsible for immigration, multiculturalism, and the erosion of national identity. By attributing a historic government policy decision to Jewish influence, this narrative repackages longstanding antisemitic conspiracy theories within a broader extremist worldview that seeks to legitimise exclusion and resentment while diminishing Jewish victimhood.

The tweet below exemplifies this narrative by framing Jews as a collective enemy and an existential threat. It claims Australia is engaged in a “war for our very survival” and asserts that “jews and their servants” are responsible for importing violent criminals, explicitly linking Jews to the Bondi attack by alleging they “continue to bring in more and more violent criminals like the Bondi Shooters” [emphasis added]. This framing falsely portrays Jews as orchestrators of violence and holds them collectively responsible for acts they did not commit. 

 

Similarly, another tweet draws on conspiratorial antisemitic tropes by highlighting that American reporter Walter Lippmann was Jewish and linking him to the abolition of the White Australia policy. The accompanying image reinforces this narrative through the “Happy Merchant” caricature and the phrase “oy vey, goyim,” mocking Jewish people and implying that Jews manipulate society to engineer social decline.

 

 

Hate Imagery

Finally, analysis of online content related to the attack and its aftermath revealed a recurring and deeply concerning pattern: the use of dehumanising antisemitic imagery. Across multiple posts, Jews were repeatedly depicted through classic antisemitic tropes that have circulated for centuries, including portrayals of Jews as pigs or animals, alongside imagery designed to ridicule, degrade, and strip victims of humanity. These depictions function not merely as insults, but as a means of normalising hatred and legitimising hostility by reducing Jews to objects of contempt rather than recognising them as victims of violence.

The tweet below uses pork imagery to dehumanise a Jewish victim and justify his murder. By referring to one of the victims as a “fat pork chip,” the post mocks Jewish religious identity while pairing the insult with blood libel accusations (allegations that Jews have murdered non-Jews in order to use their blood in rituals) and claims that he “got what he was promised 3000 years ago” (a popularised meme used to mock the concept of a divine promise of a Jewish homeland and to portray Jews as greedy, dishonest, or delusional). Together, these elements frame the killing as deserved and reinforce classic antisemitic tropes that excuse violence against Jews.

The tweet below further demonstrates the use of hateful imagery, deploying a distorted “Happy Merchant” caricature to dehumanise Jews and mock Jewish victims.

 

Footnotes:

4. The White Australia Policy was introduced in 1901 through the Immigration Restriction Act and aimed to exclude non-European migrants in order to maintain a racially white population within Australia. The policy formally ended in 1973 when the Whitlam Government removed race-based criteria from Australia’s immigration laws. For further details, see: https://www.nma.gov.au/defining-moments/resources/end-of-white-australia-policy.

Key Insights

Comment Sections as Primary Vectors of Abuse

Antisemitic content appeared both in original posts and in comment sections across platforms. Although the dataset contains a higher number of posts than comments, a consistent pattern that CyberWell observed following violent incidents targeting Jews is the spread of antisemitic narratives within the comment sections of highly viewed posts from news media accounts. These are high-traffic spaces with significant readership, allowing harmful narratives to circulate beneath widely shared and credentialed news media coverage. 

In many cases, the comment sections that CyberWell reviewed were largely unmoderated and quickly became hotspots for violent rhetoric, Conspiratorial Self-Victimisation, and graphic or dehumanising imagery. The same narratives identified in standalone posts were repeatedly echoed and reinforced in comment threads, amplifying harm through repetition, visibility, and engagement. This pattern raises serious concerns about the absence of meaningful oversight in news media comment sections. 

Under current Australian law, social media platforms and news media outlets operating on these platforms are subject to legal obligations aimed at protecting the integrity and safety of online communications, including responsibility for unlawful or harmful content. News media outlets that maintain social media pages may be treated as publishers of third-party comments and can be held liable for defamatory material posted by users, while platforms are required to enforce their community standards and respond to reports of unlawful content. Both platforms and content providers must comply with the Online Safety Act 2021, including removal notices and directions issued by the eSafety Commissioner in relation to serious online abuse or harmful material. These obligations can apply to international media organisations where their content is accessible in Australia and has a sufficient connection to Australian audiences.

Enforcement gaps, however, remain, highlighting the need for clear and enforceable moderation expectations, including meaningful consequences for outlets that fail to address harmful content within their comment sections.

 

Use of Emojis as Coded Hate

Across all platforms, emojis were repeatedly used to convey antisemitic messages, most notably pig imagery (see exhibits K and L) used as a coded reference to Jews. This symbolic language was especially prevalent in comment sections and enabled users to evade moderation by avoiding text altogether, while still dehumanising Jewish victims. 

In both examples below, two separate Al Jazeera videos covering the attack include comments that deploy pig imagery to refer to Jews. Exhibit K is particularly dehumanising, using an image of a pig being roasted to symbolise dead Jewish victims, reducing real human loss to an object of mockery and contempt. These examples illustrate the use of coded language and highlight the challenge of moderating antisemitic content that relies on symbolism rather than explicit slurs.

Exhibit K

Exhibit L

Multilingual Spread and Globalisation of Antisemitism

Another key insight is the significant volume of antisemitic content identified in French and Arabic, reinforcing the fact that antisemitism following the Bondi Beach attack was not confined to Australia. The borderless nature of the internet allows extremist narratives to circulate rapidly across languages and regions, complicating both regulatory enforcement and platform responses. Once amplified, online rhetoric can spread at speed and scale, extending the impact of a local act of violence into a global ecosystem of hate.

For policymakers, this presents a clear warning: national approaches to online safety and content regulation are increasingly insufficient in isolation, as harmful narratives easily cross linguistic and jurisdictional boundaries. Addressing antisemitism online will require stronger international cooperation, clearer platform accountability, and regulatory frameworks that reflect the global reach of digital harm.

Network Analysis

Analysis of accounts on X further reinforces the above insight, demonstrating that online antisemitic content related to the Bondi Beach attack originated from a geographically diverse range of users. This review was conducted only on X due to recent changes to the platform’s geolocation features. While Australia accounted for the largest share of identifiable accounts, a substantial proportion of content originated from overseas locations, most notably the United States, as well as countries across Europe, the Middle East, North Africa, and the Asia-Pacific region (see graph below). This distribution demonstrates that the online response to the attack was not confined to Australia and reflects the transnational nature of antisemitic discourse in digital spaces.
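As an illustration of this kind of breakdown, the sketch below tallies identifiable accounts by location and converts the counts to percentages. The field names and input structure are assumptions for illustration, not the actual dataset schema.

```python
# Sketch: distribution of identifiable accounts by country, as percentages.
from collections import Counter

def location_distribution(accounts):
    """accounts: list of dicts such as {"handle": "...", "country": "Australia"}."""
    counts = Counter(a.get("country", "Unknown") for a in accounts)
    total = sum(counts.values())
    return {country: round(100 * n / total, 2) for country, n in counts.most_common()}
```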

What’s Next?

Analysis of this dataset demonstrates that, while platforms were more likely to remove content that explicitly called for or celebrated violence, conspiratorial narratives, particularly false flag claims and Conspiratorial Self-Victimisation, were largely left unaddressed. These narratives do not sit outside the cycle of violence. They are part of it. By denying victimhood, shifting blame onto Jewish communities, and reframing terrorism as justified or fabricated, such content creates an environment in which further harm becomes easier to excuse and more likely to occur.

Unchecked, these narratives do not remain confined to online spaces. They erode empathy, normalise dehumanisation, and contribute to a broader climate in which violence is legitimised. Harm does not end when an attack is over. It continues through the circulation of content that distorts reality, mocks victims, and denies accountability.

In the aftermath of the Bondi Beach attack, former Australian Treasurer Josh Frydenberg warned of the real-world consequences of rhetoric left to fester. He stated that “bad things happen when good people stay silent” and cautioned that, “unless our governments, federal and state, take urgent, unprecedented and strong action, as night follows day, we will be back grieving the loss of innocent life in another terrorist attack in our country.” These words reflect the findings of this report. The failure to address conspiratorial and dehumanising antisemitic narratives is not a neutral act. Silence, in this context, enables harm.

The scale of views and interactions seen in this specific attack is not unique. Yet it critically underscores the urgency for platforms to identify and moderate antisemitic content at early stages, especially during moments of heightened public attention following acts of terrorism. If platforms and policymakers continue to treat conspiratorial content as less dangerous than explicit violence, they risk missing the conditions that allow violence to take root in the first place. Meaningful intervention must extend beyond the most overt expressions of harm and confront the narratives that sustain it. Without decisive, coordinated action, the cycle of violence documented here will continue, with devastating consequences for targeted communities and for public safety more broadly.

CyberWell will continue to monitor these harmful trends, calling attention to policy and enforcement gaps, and working with platforms to make our digital spaces safer for Jews – and everyone – everywhere.
