Policy Guidelines

Introduction

Since the advent of social media, and amid growing concern about user safety and the real-world consequences of online harassment and hate, social media platforms have developed digital policies, or community standards, to self-regulate the use of their platforms and the interactions among users. The primary goals of these safety standards are to ensure that the digital environment reflects the platform’s company values while respecting legal obligations and user-safety concerns. For these reasons, digital policies also prohibit certain forms of content, and sometimes even specific types of accounts, because the language, imagery or organizations they promote could violate the law or contribute to a toxic digital environment that does not meet the platform’s own standards of legitimate use. These policies are constantly evolving as platforms adapt to new forms of hate speech, hateful conduct and abuse of social media by actors promoting violence and hatred.

 

However, leaked information from the leading platforms’ own research departments, as well as focused audits, has shown that the enforcement and technological implementation of these digital policies remain abysmal.

 

CyberWell analyzes all of the relevant digital policies and community standards produced by the platforms that we monitor for online antisemitism. We review these rules to identify the self-regulations that could be breached by posting or promoting Jew-hatred online, whether it is classic antisemitism, extremist ideology or modern conspiracy theories blaming Jews for global problems.

 

This comparative analysis of the digital policies led us to create the following research and monitoring policy guidelines. These guidelines enable us to identify major policy themes and rules recognized across the digital platforms as “out of bounds” of legitimate social media use. In compiling our guidelines, we also identify specific sub-categories that are prohibited by only some of the platforms but are still consistent with the wider phenomenon of online antisemitism. We rely on these guidelines when assessing whether antisemitic content posted on a platform violates that platform’s community standards, and we reflect that assessment in the CyberWell online database and in our reports to the social media platforms.

 

CyberWell’s mission is to drive the enforcement and improvement of hate speech policies across the digital space, specifically when it comes to online Jew-hatred, one of the most prevalent forms of online hate today. 

 

As such, our monitoring and research methodology offers unmatched added value: it does the compliance and brand-safety work on antisemitism for the social media platforms themselves. To fill the gaps in existing digital policies, which have led platforms to fail to recognize modern forms of Jew-hatred as antisemitic, we also rely on the International Holocaust Remembrance Alliance’s working definition of antisemitism as our core guiding definition of what constitutes Jew-hatred. All monitored content is categorized according to one of the 11 examples of antisemitism in the IHRA working definition and according to the digital policy or community standard it violates.

 

Terminology

This section explains the terminology common to social media digital policies and used in our own working process of monitoring and reporting online antisemitism. The following definitions reflect the generally accepted common language, or point of reference, across social media platforms and are referenced throughout CyberWell’s digital policy guidelines.

CyberWell focuses its research and monitoring on the following platforms: Facebook and Instagram (both owned by Meta and governed by common guidelines), TikTok, Twitter, and YouTube.

Content

Content is any kind of information communicated by users, including speech, written posts, Tweets, and visual or audio material such as videos, images or sound that is shared, published, uploaded or posted on a social media platform.

Protected Characteristics 

Individuals bear specific qualities that define their identity or their affiliation with a collective. Certain of these features are protected by international standards and national laws, which ban discrimination and hate speech targeting people on the basis of what are defined as protected characteristics.

The broad spectrum of protected characteristics across the platforms includes: race, ethnicity, nationality, national origin, citizenship, statelessness, religious affiliation, sex, sexual orientation, gender, gender identity, age, disability, disease, and caste.

 

Throughout digital policy, the definition of protected characteristics is meant to shield individuals and groups who are identified as part of a collective due to their innate characteristics from illegitimate speech or conduct.

Illegitimate or illegal conduct, such as hate speech or abusive behavior directed toward an individual or a group on the grounds of one or more of these characteristics, is widely prohibited on social media platforms.

Violent Organization / Dangerous Organization 

A violent/dangerous organization is recognized as such based on its activity offline; its mere presence online is therefore banned, as is any expression of user support for its actions, members, and goals. The following is a general definition based on the platforms’ guidelines:

The presence of, support for, praise of or calls to violence by groups/individuals which are known terrorist organizations, extremist groups, supremacist groups, or hate and criminal organizations, as well as their leaders, founders, prominent members or sympathizers. It is generally prohibited to justify violent actions and events carried out by these dangerous actors. This policy includes the prohibition of recruiting for such violent organizations or promoting their presence via visual insignia, flags, symbols, etc.

 

Recently, certain platforms have expanded this policy to include promoters of violent conspiracy theories and accounts that repeatedly spread online hate or display open hostility toward individuals or groups of people based on their protected characteristics.

 

All platforms prohibit content from groups and ideologies that promote violence, as well as content that supports such groups. Facebook and TikTok define the widest scope of dangerous organizations, including groups that use violence against protected groups, civilians in general, the state or the military, motivated by an ideology or sharing the purpose of using violence in the future. YouTube does not carve out such organizations specifically, but prohibits the broadest range of content that praises, glorifies, promotes, or supports violent groups, including showing hostages in order to intimidate the public.

 

Drawing on the social media platforms’ policies intended to protect against the presence of, recruitment for, and support of dangerous groups, CyberWell collects data on violent/dangerous organizations that use, or show intent to use, violence against protected groups, civilians in general, or states, and which are motivated by a hateful and antisemitic ideology or by criminal purposes.

General Themes

Six general themes are common to all social media platforms and are defined and treated similarly across the different policies and standards.

 

These themes cover the banning of specific entities or specific behaviors. They are relevant to the monitoring of antisemitism, which is promoted by extremist entities, motivates illegitimate behavior, and manifests itself through prohibited content.

1. Violence

Any kind of content that includes threats, stated intention, incitement or encouragement to cause death, bodily injury or physical assault, or the stated desire that such acts occur, directed at individuals or groups, including on the basis of their protected characteristics, and motivated by an ideology or by criminal purposes.

 

More broadly, violence also includes language or content that incites, coordinates, advocates, facilitates, justifies, or portrays violent acts (including violent pranks) as positive or justified, or that encourages, depicts or praises violent acts or missions of dangerous organizations/individuals, especially when the event celebrated violently targeted people because of their membership in a protected group and/or may inspire or incite others to violence, online or offline. Similarly, coordinating violence against individuals or members of a protected group is prohibited.

2. Dangerous Organizations / Individuals

Organizations, groups of individuals, and their individual members that have used, use, or show intent to use violence against protected groups, civilians in general, or states and their agencies, and which are motivated by an ideology or by criminal purposes.

This also includes groups and individuals who support or advocate for the use of violence, by promoting, praising or glorifying ideologies, symbols, acts or members of said organizations or groups, by sharing any form of content.

3. Dehumanizing & Stereotypical Hate Content

Content that attacks or dehumanizes individuals or groups based on their protected characteristics.
Prohibited content in this category includes speech or imagery in the form of generalizations, comparisons, unqualified behavioral statements and/or stereotypes (i.e., tropes). Digital platforms have policies against this content because it can embrace a hateful ideology, inspire hatred or fear of said groups and even incite violent acts against a group or its members. Some platforms address Holocaust denial through this community standard.

4. Harassment & Bullying

Repeated targeting of an individual or group of individuals, based on their protected characteristics, through any sort of content or communication that uses abusive language, targeted cursing, mockery or slurs and that is intended to cause humiliation, embarrassment, degradation, shaming, disrepute or stigma.

 

This also includes threatening, intimidating, inciting or encouraging others to harass and bully others on the platform.

5. Graphic Imagery

Content and media that glorifies violence, celebrates the suffering or humiliation of others, or is excessively graphic, whether or not it is shared for sadistic purposes.

 

This can include content that shows violent acts in detail, such as fights, assaults, murders, torture, terrorist attacks, victims of violent acts or disasters, including injured, burned, mutilated, dismembered or dead bodies.

The policy against graphic imagery is meant to prevent the praise of the perpetrators of such acts, the encouragement of the audience to hate or act against an individual or group considered responsible for such acts, and the malicious assignment of responsibility for violent acts to a particular group or individual.

6. False Information

False information, including false news and/or manipulated images, media, content or sound designed to give the impression of statements or events that did not happen.

 

The platforms have developed specific community standards regarding misinformation on COVID-19, which is addressed as a specific category below.


Specific Categories

There are specific categories of prohibited content or social media behavior that are relevant to the phenomenon of antisemitism and that are addressed differently by each of the digital platforms.

 

These categories are relevant to numerous forms of antisemitic expression and are regulated differently across the platforms. The sections below set out the relevant policies for each platform and the definitions used by CyberWell to monitor, analyze and report antisemitic content across the digital space.

1. Holocaust Hate Speech

CyberWell treats Holocaust hate speech as a distinct category of hate speech, which includes any form of Holocaust abuse used to further conspiracy theories and hatred.

Digital platforms address Holocaust denial and distortion together under various policies and standards. The following is a brief description of the sections and standards which address Holocaust hate speech across the digital space:

 

Facebook – explicitly prohibits Holocaust distortion (hate speech, Tier 1).

 

TikTok – prohibits the denial of well-documented events targeting groups with protected characteristics (hateful behavior).

 

Twitter – prohibits hateful conduct, including “references to mass murder, violent events,” which covers the Holocaust.

 

YouTube – prohibits content that “den[ies] that a well-documented, violent event took place”, which also includes the Holocaust (hate speech policy).

 

CyberWell collects Holocaust hate speech in the numerous forms it assumes, including denial, distortion, trivialization and general abuse directed at its scope, victims, historical circumstances, survivors, research and memorialization. Images and lexicon associated with the Holocaust are increasingly used in different forms of hate speech.

 

Our core reference for identifying Holocaust hate speech is the fourth, fifth and tenth examples of the working definition of antisemitism by the International Holocaust Remembrance Alliance.

2. Conspiracy Theories that Incite Violence

Four out of the five major social media platforms monitored by CyberWell regulate content that promotes conspiracy theories that can incite violent acts. To date, Twitter has not explicitly prohibited content that promotes violence-inspiring conspiracy theories.

 

CyberWell monitors conspiracy networks: individuals or groups that promote opinions and beliefs presented as truths, and that portray specific groups and their members, including because of their protected characteristics, as the source of moral and social corruption, as evil or malignant forces, or as the masterminds of plans for the domination, control or disruption of institutions, the social system, or economic, political and financial crises. Within this frame, conspiracy networks develop hate speech toward groups and their members, including by fabricating news and media, often with the intent to inspire, encourage or directly call for targeting, attacking, or discriminating against said groups and their members.

3. Explicit Threats

This is a specific category recognized by TikTok and Twitter, with a particular threshold: the user making the threat on the platform must state both intent and a violent outcome against an individual or a group.

CyberWell tracks explicit threats of any form of physical violence or assault against individuals or groups, which are often expressed during antisemitic campaigns or via conspiracy theories that promote Jew-hatred. Since such content has legal ramifications, CyberWell passes it on to law enforcement agencies via strategic partners that we collaborate with in this space.

4. COVID-19 Misinformation

In response to the spread of fake news concerning the COVID-19 pandemic, platforms have banned, under various sections, COVID-19 misinformation, defined as content that denies the existence of COVID-19, including through conspiracy theories, or that spreads misinformation on the nature and scope of the pandemic, including in contradiction to accepted medical knowledge and public health regulations.

 

CyberWell collects data on misinformation regarding the COVID-19 virus specifically where it incorporates associated forms of antisemitic hate speech and conspiracy theories.

5. Hate Imagery

Content that contains names, symbols, insignia, logos, flags, slogans, uniforms, gestures, salutes, illustrations, images, portraits, caricatures, songs, music, lyrics, or other objects related to a hateful ideology and intended to promote hostility or hatred against people on the basis of protected characteristics. This includes visual comparisons, generalizations, objectification, dehumanization or tropes meant to victimize individuals or groups based on their protected characteristics.

 

Conscious that hate speech takes different forms, both discursive and visual, CyberWell collects data on hateful imagery, including images, logos, salutes, uniforms and symbols, associated with a hateful ideology or hateful organizations, or used to depict and spread stereotypes, misconceptions and tropes that vilify, denigrate, or dehumanize a protected group and its members; ridicule violent events or their victims; or intimidate or promote violence against a protected group.

6. Coordinated Discrimination or Segregation of Groups or Individuals because of their Membership in a Group

Inciting, advocating, justifying and/or promoting the exclusion, segregation, and discrimination against a protected group and its members in order to cause economic damage, social penalization or denial of goods and services, on or off-platform, to the group and its members. It is important to note that Twitter makes a specific carve-out for political commentary, political boycotts and demonstrations.

 

CyberWell is committed to free speech, freedom of opinion and freedom of expression, which also means freedom from hatred. While calls to boycott can be a form of political expression and encouragement to social engagement, they may also include hate speech, false news and misinformation. This is particularly true when a call to boycott is based on false assumptions, manipulated media or unsubstantiated news.

 

This kind of misinformation directly results in a call to discriminate against, exclude and deny services or goods to individuals or groups because of their protected characteristics, including national origin, nationality, religion, political affiliation or belief. 

 

CyberWell collects data on hate speech and misinformation related to campaigns, as well as individual pieces of content, calling for the sanctioning of individuals, groups, and their associated businesses because of their protected characteristics.