Nowadays, social networking sites are an essential part of our daily lives. They enable us not only to connect globally, but also to help shape socially relevant debates and to learn about recent events within seconds. On the Internet, users encounter people who advocate different values or hold different ideologies. Such a convergence of views is essential to the new media landscape, which in turn is an important part of our democracy. However, confrontations between people with differing opinions on social networking sites have also turned so-called hate speech into a problem (Kettrey & Laster, 2014). A representative opinion survey conducted by the Landesanstalt für Medien in North Rhine-Westphalia in 2018 found that 78% of participants had already encountered hate speech on the Internet (Landesanstalt für Medien NRW, 2018).

There is no generally accepted definition of the term hate speech. The existing consensus, however, is that hate speech targets marginalized social groups in ways that are potentially harmful to them (Jacobs & Potter, 2000; Walker, 1994). It is thus a form of group-based misanthropy that finds expression in violent language. This group-focused hostility includes stereotypes, prejudices and discrimination against people based on their actual or perceived membership in a socially marginalized group. Such devaluations assume that certain groups of people are worth less than others, thereby denying them equal rights. Hate speech devalues some groups more than others: in doing so, it takes up structures of power and discrimination that are already widespread in the “analog” (offline) society.

The multiple faces of hate speech

Discriminatory structures such as hostility towards Sinti and Roma, classism (prejudice based on social background) and ableism (hostility towards people with disabilities) are invoked particularly frequently. These verbal attacks on individuals or groups are often based on certain attributes, such as origin, religious affiliation, social status, skin color, gender, sexual orientation or combinations thereof (Delgado & Stefancic, 2004). There is barely a human characteristic that is not made an object of hatred, which is why any list of affected groups remains incomplete (Maibauer, 2013). It should also be noted that categorization per se is not what constitutes hate speech; rather, it is the discrimination and hatred expressed on the basis of mere categorization (Graumann & Wintermantel, 2007, as cited in Maibauer, 2013).
So-called ‘haters’, i.e. the perpetrators, are often people who aim to disrupt communication and to spread degrading and insulting content (Bundeszentrale für politische Bildung, 2017). Most of them want to impose their ideology on other users.

Hate speech can appear in different forms. In addition, strategies of concealment are often used so that hate speech is not easily recognizable to every user.
Maibauer (2012) summarizes the different forms of hate speech in five dimensions. The first dimension distinguishes between direct and indirect hate speech: perpetrators can express their hatred directly (e.g., “You foreigners have no place in this country”) or indirectly (e.g., “You would be better off abroad”).
The second dimension distinguishes between overt and covert hate speech. Overt hate speech can be found in many Internet forums that explicitly invite it; both forms appear on social networking sites, among other places. According to Maibauer, even a televised discussion about the “unwillingness to integrate” of foreigners can already constitute hate speech. The third dimension takes up the aspect of power: because of its discriminatory character, hate speech is usually associated with a power differential between the perpetrator (as part of the social majority) and the addressee (as part of a social minority). The fourth and fifth dimensions state that hate speech can be linked to a threat of violence and can take a strong or a weak form.

The framing effect also plays a central role in the context of hate speech, especially when it is concealed. The framing effect refers to the phenomenon that messages with identical content, formulated differently, influence the recipient’s behaviour in different ways. Language is never neutral, since the words that are used influence our thinking and therefore our actions (Tversky & Kahneman, 1981). Accordingly, the same information can lead to different behaviour. Devaluing or racist framing can have a highly negative effect on recipients: the term “foreigner”, for instance, has a completely different effect than the word “immigrant”. At the same time, the effects of framing can vary considerably between individuals: some people are strongly influenced, while others are hardly susceptible to framing (Li & Xie, 2006). Brain scans have shown that this depends on how strongly the prefrontal cortex, which is also involved in action planning, is activated (Raab et al., 2009).

Consequences of hate speech

Hate speech can have wide-ranging consequences. In general, hate speech disregards human dignity: victims are defamed and marginalized, which in turn can provoke violence.
In addition, hate speech often goes unnamed, is not denounced as such, and is not met with counterspeech. In some cases, discriminatory statements are even ignored and almost accepted (Kaspar et al., 2017). In the long term, this can cause social outrage to diminish and pejorative statements to appear legitimate or even acceptable. In the worst case, hate speech forms a breeding ground for real assaults.

Furthermore, messages of hatred are a form of psychological violence. They are associated, for example, with increased depression, sleep disorders and even suicide (Bilewicz & Soral, 2020). The 2019 study “#Hass im Netz” found that young people in particular suffer from encountering hate speech: every second respondent reported emotional distress, 31% of those affected reported depression, and 42% reported problems with their self-image as a result (Geschke, Klaßen, Quent & Richter, 2019).
On the Internet, hate messages can also distort the perceived climate of opinion and cause polarization. The increased online presence of right-wing extremist actors in recent years appears to have moved anti-democratic propaganda into the center of society. As a result, right-wing extremist language and imagery have become established in the mainstream within a short period of time (Amadeu Antonio Stiftung, 2018).
Another possible consequence of hate speech is a shift in perception: if ‘haters’ are overrepresented in the digital world, they appear to be the majority in the “analog” world as well. In a representative survey, 59% of participants agreed with the statement “Public hatred on the net has changed what can and cannot be said outside of the Internet” (Geschke, Klaßen, Quent & Richter, 2019). In addition, extremist groups receive disproportionate attention and can thus spread their views to a broad population.

What can be done against hate speech?

Considering the possible consequences of hate speech, social networking sites such as Twitter and Facebook have come under strong criticism. They are accused of not doing enough to prevent hate speech. There are calls for guidelines that prohibit the use of their platforms to attack people based on characteristics such as ethnicity, gender, or sexual orientation. In Germany, the “Netzwerkdurchsetzungsgesetz” (NetzDG) came into force in 2017 in response. However, social networking sites still face the problem of identifying and censoring hate speech (Moulson, 2016), and of considering whether doing so restricts freedom of expression. As definitions of hate speech vary, identification must be carried out manually and on a case-by-case basis (Lomas, 2015).

In this context, there is a growing debate about the extent to which regulation of hate speech on social networking sites is compatible with the liberal democracy that prevails in Germany.
On the one hand, it seems questionable to censor hate speech, since the German constitution guarantees the free expression of opinion; censorship would restrict the speakers’ freedom of expression. It is also argued that this would restrict not only the rights of those producing hate speech, but also those of the recipients: regardless of how controversial an opinion may be, people have a right to access it and are otherwise deprived of both freedom of information and freedom of choice (Maibauer, 2013).
The other side argues, however, that democracy as a whole must be protected from its so-called “enemies” and that freedom of expression must therefore be restricted under certain conditions. Victims of hate speech may be intimidated to such an extent that they are silenced (Maibauer, 2013). They are thus deprived of their rights of democratic participation, which is why regulation of hate speech is demanded.
Moreover, protection against discrimination is likewise enshrined in the constitution. Bleich (2011) notes that it is possible to enact and enforce laws that restrict certain forms of hate speech without unduly limiting freedom of expression. Ultimately, the regulation of hate speech will probably always remain a case-by-case balancing of fundamental rights.

Everyone who uses social networking sites bears a certain responsibility and can help to ensure that hate speech, and especially its consequences, do not take on far-reaching dimensions. To accomplish this, it is important to create a digital civil society. This means not looking away when encountering hate speech on social networking sites, and actively standing up against hate. Civil courage is needed both offline and online to demonstrate that hatred and harassment have no place in our society. As Garland et al. (2020) showed, counterspeech actually has a depolarizing effect and encourages people to engage in further counterspeech, resulting in a drastic reduction in the incidence of hate speech.

Hate speech, especially against minorities, has increasingly become a problem in digital societies. It has serious consequences for its victims and for society as a whole, as it coarsens discourse and reduces the willingness to participate in digital spaces. Hate speech is often implicit and hence not directly tangible. Nevertheless, it can be fought by platform providers and through state intervention. Ultimately, each individual can contribute to better social interaction through courageous intervention.


Amadeu Antonio Stiftung. (2018). Was ist Hate Speech?

Bilewicz, M., & Soral, W. (2020). Hate Speech Epidemic. The Dynamic Effects of Derogatory Language on Intergroup Relations and Political Radicalization. Political Psychology, 41(S1), 3–33.

Bleich, E. (2011). The Rise of Hate Speech and Hate Crime Laws in Liberal Democracies. Journal of Ethnic and Migration Studies, 917–934.

Bundeszentrale für politische Bildung. (2017, July 12). Was ist Hate Speech?

Garland, J., Ghazi-Zahedi, K., Young, J.-G., Hébert-Dufresne, L., & Galesic, M. (2020). Impact and Dynamics of Hate and Counter Speech Online. arXiv, 1–15.

Geschke, D., Klaßen, A., Quent, M., & Richter, C. (2019, June). #Hass im Netz: Der schleichende Angriff auf unsere Demokratie. Eine bundesweite repräsentative Untersuchung.

Jacobs, J. B., & Potter, K. (2000). Hate Crimes: Criminal Law & Identity Politics. Oxford University Press.

Kaspar, K., Gräßer, L., & Riffi, A. (2017). Online Hate Speech – Perspektiven auf eine neue Form des Hasses. kopaed.

Kettrey, H. H. & Laster, W. N. (2014). Staking Territory in the “World White Web”: An Exploration of the Roles of Overt and Color-Blind Racism in Maintaining Racial Boundaries on a Popular Web Site. Social Currents, 1(3), 257–274.

Landesanstalt für Medien NRW. (2018). Ergebnisbericht: Hassrede. Forsa, 1–11.

Lomas, N. (2015, December 16). Facebook, Google, Twitter commit to Hate Speech Action in Germany. TechCrunch.

Maibauer, J. (2012). Hassrede/Hate Speech: Interdisziplinäre Beiträge zu einer aktuellen Diskussion.

Li, S., & Xie, X. (2006). A new look at the “Asian disease” problem: A choice between the best possible outcomes or between the worst possible outcomes? Thinking & Reasoning, 12(2), 129–143.

Raab, G., Gernsheimer, O., & Schindler, M. (2009). Neuromarketing: Grundlagen – Erkenntnisse – Anwendung (2nd ed.). Gabler.

Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453–458.

Walker, S. (1994). Hate Speech: The History of an American Controversy. University of Nebraska Press.

Hate Speech – When words become weapons in online networks