The opinion bots that silence citizens?

In today’s society, social networks such as Facebook and Twitter serve as a mouthpiece for our opinions. Increasingly, there are reports that, alongside human actors, bots are present which steer the opinion climate in a particular direction (Herrman, 2017; Hern, 2017). The question is: to what extent do these bots influence the formation of majority opinion in social networks, and can current methods such as agent-based modelling help to answer this question?

When we talk about a bot, we mean an algorithm that performs automated tasks over the Internet without depending on human assistance. There are different types of bots. In contrast to chatbots, which focus primarily on dialogue with a user, social bots act within social networks and are able to interact autonomously there: they can write comments, like posts, or share a particular article. Because they imitate human behavior, it is currently challenging to reliably identify social bots in a network and to determine whether an account belongs to a real person or an algorithm. Current approaches aim to identify bots using machine learning methods (Subrahmanian et al., 2016), for example unsupervised clustering techniques (Miller et al., 2014) or supervised algorithms trained on human-coded data (Varol et al., 2017). Bots are used not only in the commercial sector to raise attention for companies and their products, but also in political contexts to manipulate social discourse. A study by Bessi and Ferrara (2016) examined the activity and behavior of social bots during the 2016 US presidential election and identified about 400,000 bots participating in the political discussion. In the period from 16 September to 21 October 2016, these bots wrote about 3.8 million tweets and thus actively took part in the election campaign.
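To illustrate the supervised approach, a minimal sketch of a feature-based bot classifier might look as follows. This is not the pipeline of Varol et al. (2017); the feature names, the random stand-in data, and the model choice are all hypothetical.

```python
# Minimal, illustrative sketch of supervised bot detection in the spirit of
# Varol et al. (2017) -- NOT their actual pipeline. Feature names and data
# are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical per-account features: tweets per day, follower/friend ratio,
# mean time between posts, fraction of retweets.
n_accounts = 1000
X = rng.random((n_accounts, 4))
# Human-coded labels: 1 = bot, 0 = human (random stand-in data here).
y = rng.integers(0, 2, n_accounts)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```

In a real detection setting, the human-coded labels come from annotators inspecting accounts, and the feature set is far richer (content, timing, network features); the sketch only shows the overall train-and-predict structure.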

Given this strong presence of bots in online communication, it seems conceivable that bots can convey an opinion climate that does not necessarily reflect the actual opinion climate among the population. In other words, bots could manipulate what is being discussed and suggest a majority opinion (e.g., “Hillary Clinton is crooked”) that factually does not exist. Considering that social media users rely on just a few user-generated comments to infer opinion trends in the population (Neubaum & Krämer, 2017), bots that unanimously represent one stance could distort perceptions of the opinion climate in those networks. If users conclude that the alleged majority opinion (created by bots) does not correspond with their own, they might lapse into silence and hold back their viewpoint. This principle was proposed by the spiral of silence theory (Noelle-Neumann, 1974): the more I perceive my personal opinion to be in line with the majority, the more willing I will be to express my point of view. In the long run, this means that one opinion faction (the alleged majority) becomes more and more visible, while the other (the alleged minority) vanishes from the scene.
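One way to make this mechanism concrete (a formalization of our own, not a formula taken from the theory or from the study discussed below) is to let each individual i hold an opinion o_i and speak up only if the share of agreeing voices among their currently vocal network neighbors reaches an individual threshold:

```latex
% Hedged formalization of the expression rule; the symbols are our own.
% o_i     : opinion of individual i
% N_i     : i's neighbors who currently express an opinion
% theta_i : i's individual willingness-to-speak threshold
s_i = \frac{\lvert \{\, j \in N_i : o_j = o_i \,\} \rvert}{\lvert N_i \rvert},
\qquad
\mathrm{express}_i =
\begin{cases}
1 & \text{if } s_i \ge \theta_i \\
0 & \text{otherwise}
\end{cases}
```

Under such a rule, a bloc of always-vocal accounts inflates the perceived share s_i for one side, which is exactly the distortion described above.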

Against this backdrop, we ask: to what extent are social bots able not only to convey distorted representations of the opinion climate but also to influence individuals’ opinion expression behavior? Beyond the difficulty of identifying bots, the dynamic processes of the spiral of silence are hard to investigate in laboratory experiments or field trials without considerable effort and cost, and such designs cannot simulate the process in detail. The study “Are social bots a real threat? An agent-based model of the spiral of silence to analyse the impact of manipulative actors in social networks” therefore uses agent-based modelling, derived from spiral of silence theory, to investigate the influence of social bots within a network and their effect on opinion formation (Ross et al., 2019). For this study, communication processes in a network were simulated in an artificial setting with artificial individuals (i.e., agents). This made it possible to test under which circumstances individuals share their political views with others and how this individual behavior accumulates into group behavior in the long run. For the simulation, human behavior was modeled and programmed on the basis of previous psychological and communication research. The artificial network included not only human actors but also artificial actors in the form of bots, which followed a fixed behavioral script.
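To give a flavor of what such a simulation looks like, here is a deliberately simplified Python sketch implementing the expression rule formalized above. The network structure, parameter values, and update schedule are our own illustrative assumptions, not those of the Ross et al. (2019) model.

```python
# Deliberately simplified agent-based sketch of a spiral-of-silence dynamic
# with bots -- an illustration of the general approach, not the model of
# Ross et al. (2019). Network, parameters, and update rule are assumptions.
import random


def run_sim(n_humans=500, n_bots=15, degree=10, steps=50, theta=0.4, seed=1):
    """Return the share of still-vocal humans holding the bot-promoted opinion."""
    rng = random.Random(seed)
    # Humans get a random opinion (+1/-1); bots all push +1 and always speak.
    opinions = [rng.choice([-1, 1]) for _ in range(n_humans)] + [1] * n_bots
    is_bot = [False] * n_humans + [True] * n_bots
    n = len(opinions)

    # Random undirected network: each agent is linked to `degree` random others.
    neighbors = [set() for _ in range(n)]
    for i in range(n):
        for j in rng.sample([k for k in range(n) if k != i], degree):
            neighbors[i].add(j)
            neighbors[j].add(i)

    expressing = [True] * n  # everyone starts out speaking
    for _ in range(steps):
        new_expressing = []
        for i in range(n):
            if is_bot[i]:
                new_expressing.append(True)  # bots never fall silent
                continue
            vocal = [j for j in neighbors[i] if expressing[j]]
            share = (
                sum(opinions[j] == opinions[i] for j in vocal) / len(vocal)
                if vocal else 1.0
            )
            new_expressing.append(share >= theta)  # expression rule from above
        expressing = new_expressing

    vocal_humans = [i for i in range(n_humans) if expressing[i]]
    return sum(opinions[i] == 1 for i in vocal_humans) / max(len(vocal_humans), 1)


print(f"share of vocal humans voicing the bot-promoted opinion: {run_sim():.2f}")
```

Varying the number of bots, the network degree, or the threshold makes it possible to probe, in this toy setting, the kinds of questions the study asks at scale.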

The results showed that a share of just 2 to 4% bots is enough to influence the opinion climate in a network and to shape the opinion trend in the direction promoted by the bots. Which opinion the bots represent is not decisive; what matters is (a) how many connections the network has, (b) where in the network the bots are placed, and (c) whether the bots act more or less like humans.

The study contributes to existing research on the spiral of silence and offers new perspectives for the further development of methods in this area. Agent-based modeling has the advantage that complex behaviors in a network can be represented at different levels and multiple scenarios can be computed. Employing this method, various scenarios and their evolution can be compared, and conclusions can be drawn about the influence that social bots in a social network have on the majority distribution of opinions.
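The “multiple scenarios” point can be illustrated with a simple parameter sweep. The fragment below reuses the hypothetical run_sim() from the sketch above (it is not self-contained on its own) and averages several random replications per bot count:

```python
# Hypothetical scenario sweep, reusing run_sim() from the earlier sketch.
# Averages several random replications per bot count to compare scenarios.
for n_bots in (0, 5, 10, 15, 25, 50):
    shares = [run_sim(n_bots=n_bots, seed=s) for s in range(20)]
    print(f"{n_bots:>3} bots -> mean pro-bot share among vocal humans: "
          f"{sum(shares) / len(shares):.2f}")
```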

In the course of digitalization and the growing interconnectedness of people, agent-based modelling and simulation are becoming more and more important, because they deliver precise and reproducible results using mathematical models grounded in observations from existing studies. This opens up new possibilities for future research in an interdisciplinary field.

While these findings offer a pessimistic view of the manipulation potential within online networks, we should bear in mind that this study is an artificial simulation, that is, an approximation that does not capture the full complexity of human behavior. Nevertheless, these results should prompt governments and providers of online networks to plan and take measures to identify and eliminate artificial actors that intend to manipulate others and to intervene in political deliberation processes.

References:

Bessi, A., & Ferrara, E. (2016). Social bots distort the 2016 U.S. Presidential election online discussion. First Monday, 21(11).

Herrman, J. (2017). Not the bots we were looking for. The New York Times. Retrieved from https://www.nytimes.com/2017/11/01/magazine/not-the-bots-we-were-looking-for.html

Hern, A. (2017). Facebook and Twitter are being used to manipulate public opinion – report. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/jun/19/social-media-proganda-manipulating-public-opinion-bots-accounts-facebook-twitter

Miller, Z., Dickinson, B., Deitrick, W., Hu, W., & Wang, A. H. (2014). Twitter spammer detection using data stream clustering. Information Sciences, 260, 64–73.

Neubaum, G., & Krämer, N. C. (2017). Monitoring the opinion of the crowd: Psychological mechanisms underlying public opinion perceptions on social media. Media Psychology, 20, 502–531. doi:10.1080/15213269.2016.1211539

Noelle-Neumann, E. (1974). The spiral of silence: A theory of public opinion. Journal of Communication, 24(2), 43–51.

Ross, B., Pilz, L., Cabrera, B., Brachten, F., Neubaum, G., & Stieglitz, S. (2019). Are social bots a real threat? An agent-based model of the spiral of silence to analyse the impact of manipulative actors in social networks. European Journal of Information Systems. doi:10.1080/0960085X.2018.1560920

Subrahmanian, V. S., Azaria, A., Durst, S., Kagan, V., Galstyan, A., Lerman, K., Zhu, L., Ferrara, E., Flammini, A., Menczer, F., et al. (2016). The DARPA Twitter bot challenge. Computer, 49(6), 38–46.

Törnberg, P. (2018). Echo chambers and viral misinformation: Modeling fake news as complex contagion. PLOS ONE, 13(9), e0203958. https://doi.org/10.1371/journal.pone.0203958

Varol, O., Ferrara, E., Davis, C. A., Menczer, F., & Flammini, A. (2017). Online human-bot interactions: Detection, estimation, and characterization. In Proceedings of the International AAAI Conference on Web and Social Media (ICWSM).