Basic Factors of Influence in Hybrid Warfare
Darya Mazur
School of Governance and Politics, MGIMO University

Diana Urazova
School of Governance and Politics, MGIMO University
Abstract
With the advent of artificial intelligence (AI) and social media, hybrid warfare has moved onto a new landscape in which the battleground is no longer limited to conventional military means. New technologies have become powerful tools for shaping global narratives, the key to winning "the battle over people's minds", a contest that now overshadows fighting on the traditional battlefield. This article examines the ways AI and social media are used in hybrid warfare. In the course of this study, reliable sources such as academic journals, online databases, and other peer-reviewed publications were consulted. A comprehensive overview of the subject is presented, focusing on the various ways AI and social media are used in tandem in hybrid warfare. Several key trends and strategies are identified: the use of deepfakes, chatbots, and simple bots to spread false information and create echo chambers, as well as the creation of alternative media outlets to support specific narratives. The article emphasizes that AI and social media are effectively used to manipulate and deceive entire populations, causing disruption and destabilization. The paper also highlights the problem of developing countermeasures to these technologies, a task to be taken up in further studies.
Keywords: hybrid warfare, information warfare, disinformation, artificial intelligence, social media
Recently, the discussion of hybrid warfare instruments has noticeably intensified. They are now regarded as a widely used form of interstate confrontation involving non-state actors. Hybrid warfare is usually understood as a type of hostile action in which the aggressor does not resort to a conventional military invasion but suppresses its opponent using a combination of covert operations, sabotage, and cyber warfare (Murray & Mansoor, 2012). The phenomenon has several components: the use of force with limited goals, an information dimension, and an economic dimension. Without a clear analysis of these components, it is impossible to understand the processes taking place in the political life of a single country or of the world as a whole. This article looks at the information component of hybrid warfare, namely the role and influence of social networks and artificial intelligence.
The concept of hybrid warfare is not entirely new: the use of non-conventional military methods can be traced back to Ancient Greece (Murray & Mansoor, 2012). Throughout most of human history, however, the result of wars was decided in physical battles, with other methods serving to secure the desired result on the battlefield. Today the very concept of the battlefield has shifted: conventional methods increasingly serve as a supplement to winning the war over human minds.
The current wave of hybrid warfare has seen increased use of AI and social media as tools of deception and manipulation. According to Statista, a leading online statistics platform, there were 4.26 billion active social media users worldwide as of 2021, spending an average of 2 hours and 27 minutes a day on social platforms[1]. The combination of AI and social media has created a new reality in which the speed and scope of information available on the internet can be used to exploit technological, societal, and economic vulnerabilities to achieve strategic objectives.
There is as yet no generally accepted definition of artificial intelligence, but for the purposes of this article we find the definition offered by the American futurist Brett King the most relevant: artificial intelligence is primarily machine intelligence that replaces human thinking in making certain decisions or in processing the data needed to analyze specific tasks (King, 2016).
As artificial intelligence is actively embedded in the workings of social networks, it plays an increasingly important role in the processing, structuring, and delivery of information. The use of social media platforms such as Twitter, Facebook, and Instagram[2] has likewise been an integral part of hybrid warfare activities (Viren, 2013). The ability to disseminate information quickly and anonymously to a broad audience gives hybrid warfare actors an edge on the battlefield. The algorithms of social networks and news sites can contribute to the spread of misinformation and affect political interaction and activity. Machine learning can entrench and deepen bias, which in turn risks politicizing and radicalizing society as a whole or individual segments of it.
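To make this feedback loop concrete, the following toy simulation (our own illustration, not drawn from the cited sources; all numbers and the user model are assumptions) shows how a naive engagement-maximizing recommender can narrow a user's feed toward a single narrative:

```python
import random
from collections import Counter

# Toy model: an engagement-driven feed narrows over time.
TOPICS = ["politics_A", "politics_B", "sports", "science", "culture"]

# The simulated user starts with a slight preference for one narrative.
preference = {t: 0.2 for t in TOPICS}
preference["politics_A"] = 0.6

weights = {t: 1.0 for t in TOPICS}  # the recommender's ranking weights
shown = Counter()

random.seed(42)
for _ in range(1000):
    topics, w = zip(*weights.items())
    topic = random.choices(topics, weights=w, k=1)[0]  # recommend by weight
    shown[topic] += 1
    if random.random() < preference[topic]:  # "engagement" feedback
        weights[topic] *= 1.05               # multiplicative reinforcement

print("Feed composition after 1000 steps:", shown.most_common())
```

A small initial preference, amplified multiplicatively at every step, ends up dominating the feed; this is the mechanical core of the echo-chamber effect discussed below.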
AI can facilitate the educational activities of journalists and contribute positively to the free flow of information, but it can also be used for disinformation and the dissemination of "fake news". An illustrative example is the case of Cambridge Analytica (CA), a British company that used deep data analysis (in particular, of social networks) to craft strategic communication in online election campaigns (Matz et al., 2020). CA collected internet data on social media users, compiled their psychological profiles, and developed personalized advertising (AI-assisted microtargeting). The CA case showed that algorithms originally created to keep a person's inherent political bias out of content placement on social networks can be used to deliberately disseminate manipulative information to target groups. In some cases such content may fraudulently take the form of news reports or serve as material for emotional propaganda.
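The sources cited here do not describe CA's actual pipeline, but the general idea of audience segmentation for microtargeting can be sketched in a few lines, assuming scikit-learn; the features, segment labels, and message variants below are purely hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-ins for per-user behavioural signals
# (e.g., page likes or activity patterns encoded as numbers).
rng = np.random.default_rng(0)
users = rng.normal(size=(200, 5))  # 200 users, 5 behavioural features

# Cluster users into "psychographic" segments.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(users)

# Each segment is matched to a message variant tuned to its profile
# (placeholder labels, not real ad copy).
variants = {0: "variant_security", 1: "variant_economy", 2: "variant_identity"}
for user_id in range(5):
    print(user_id, "->", variants[segments[user_id]])
```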
AI has been used in hybrid warfare in a variety of ways, including deepfakes, chatbots, and the automation of social media accounts. AI systems known as "generative models" have made great progress in automated content creation, such as images generated from text prompts. Deepfakes are AI-generated images or videos manipulated to appear real, giving hybrid warfare actors the ability to spread disinformation at scale.
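How low the technical barrier has become can be illustrated with a minimal text-to-image sketch, assuming the open-source Hugging Face diffusers library, a public Stable Diffusion checkpoint, and a GPU (the checkpoint name and prompt are our illustrative assumptions):

```python
# pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # public checkpoint; any compatible model works
    torch_dtype=torch.float16,
).to("cuda")

# A single text prompt is enough to produce a photorealistic image.
image = pipe("portrait photo of a person at a press conference").images[0]
image.save("generated.png")
```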
To address this issue, we build on the ABC (Actors, Behaviors, and Content) model. Language models can affect which actors wage influence operations, how they do so, and what content they produce.
Actors: AI models drive down the cost of generating propaganda, so more actors may find it attractive to wage such campaigns. Propaganda is no longer confined to the actions of certain states: the new reality involves NGOs and even individuals acting in their own interests.
Behavior: Recent AI models can generate synthetic text that is highly scalable and often highly persuasive (see the sketch following these three points). Language models could enable new tactics to emerge, such as real-time content generation in one-on-one chatbots: AI-powered programs that simulate conversation with individuals. Simple chatbots may also be used as a tool of deception. For example, it has been reported that the Ukrainian Telegram bot "Find Your Darlings", presented as a project helping Russian citizens find relatives fighting in Ukraine and learn whether they are alive, is in fact designed to collect reliable data on members of the armed forces and their families by requesting personal data that can be used for military purposes.
Content: AI language models can craft more impactful messaging than propagandists who lack linguistic or cultural knowledge of their target. They may also make influence operations harder to detect, since they produce new content with each generation.
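The sketch promised above: assuming the Hugging Face transformers library and a small public model (the model choice and prompt are illustrative assumptions, not drawn from the article's sources), a few lines suffice to produce many superficially distinct variants of a message:

```python
from transformers import pipeline

# A small public model; larger models produce far more convincing text.
generator = pipeline("text-generation", model="gpt2")

# One call yields several superficially distinct continuations.
prompt = "Local residents report that"
variants = generator(prompt, max_new_tokens=30, num_return_sequences=5,
                     do_sample=True, temperature=0.9)
for v in variants:
    print(v["generated_text"])
```

It is this combination of near-zero marginal cost and per-message novelty that undercuts detection methods built around spotting repeated text.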
Another widespread way of combining AI and social media in hybrid warfare is the creation of so-called simple bots. A bot is a program that performs automatic, pre-configured, repetitive tasks, usually simulating the behavior of real users on social media. In the course of hybrid warfare, large numbers of fake but realistic-looking accounts are created on Facebook and Twitter. These accounts are mobilized to post regularly and to comment on articles covering local and national politics. Attackers use AI to construct realistic photographs and profiles for the account owners and to vary the content and wording of their posts, thereby avoiding the kind of replication likely to trigger detection by software designed to identify false accounts. They also build up a significant base of followers, both by having some attacker-controlled accounts follow other attacker-controlled accounts and by ensuring that attacker-controlled accounts follow the accounts of real people, many of whom follow them back.

Using fake news, bots create so-called echo chambers. The term refers to an environment in which participants encounter messages that amplify or reinforce their pre-existing beliefs through communication and repetition within a closed system, without fact-checking or a critical approach to the issue (Diaz Ruiz & Nilsson, 2023). As a result, people take what began as deception and circulate it as personal opinion, which spreads the misinformation further. The process, whether intentional or unwitting, is then reinforced by the creation of alternative media outlets to support specific narratives.
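On the defensive side, the replication and follow-pattern signals described above are exactly what simple bot detectors key on. The toy heuristic below is our own illustration; the thresholds are assumptions, not values from any cited study:

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    duplicate_post_ratio: float   # share of posts repeating other accounts
    followers_to_following: float

def bot_score(acc: Account) -> float:
    """Combine simple behavioral signals into a 0..1 bot-likeness score."""
    score = 0.0
    if acc.posts_per_day > 50:            # inhuman posting frequency
        score += 0.4
    if acc.duplicate_post_ratio > 0.5:    # mostly copied content
        score += 0.4
    if acc.followers_to_following < 0.1:  # mass-follows, few follow back
        score += 0.2
    return score

suspect = Account(posts_per_day=120, duplicate_post_ratio=0.8,
                  followers_to_following=0.05)
print(f"bot score: {bot_score(suspect):.1f}")  # -> 1.0
```

Production detectors combine many more signals in machine-learned models, which is precisely why attackers use AI to vary profiles and wording.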
There is empirical evidence of artificial intelligence technologies being used in ongoing military conflicts in the form of malicious cyber activity. For example, in 2022 researchers at the University of Adelaide in Australia conducted a study that used information-theoretic measures, sentiment and linguistic analysis, and time-series techniques to investigate the activity of Twitter accounts whose agenda related to Russia's special military operation in Ukraine. The aim of the study was to determine which segment of the social network, pro-Russian or pro-Ukrainian, was more effective and had the greater impact. In the course of the study, however, the researchers found that in July 2022 about 90% of the activity (user comments, news posts, informational messages) in pro-Ukrainian Twitter accounts was generated by bots (Smart et al., 2022).
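The study's actual toolkit is far richer, but one information-theoretic signal of the kind it names can be sketched simply: the Shannon entropy of an account's posting-hour distribution. The data below is synthetic, and the human/bot contrast is our illustrative assumption:

```python
import math
from collections import Counter

def hourly_entropy(post_hours: list[int]) -> float:
    """Shannon entropy (bits) of an account's posting-hour histogram."""
    counts = Counter(post_hours)
    n = len(post_hours)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A human tends to post in waking hours; a scheduled bot posts around
# the clock, approaching the 24-hour maximum of log2(24) ≈ 4.58 bits.
human = [9, 10, 12, 13, 13, 18, 19, 20, 21, 22] * 3
bot = list(range(24)) * 2

print(f"human: {hourly_entropy(human):.2f} bits")  # ≈ 3.12 bits
print(f"bot:   {hourly_entropy(bot):.2f} bits")    # ≈ 4.58 bits
```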
The use of AI and social media in hybrid warfare poses a significant challenge to national and international security. AI and social media may be used to manipulate and deceive entire populations, causing disruption and destabilization. The potential risks they pose are a complex issue that requires a multidisciplinary approach and dialogue among policymakers, academics, and practitioners, as does the problem of developing countermeasures to these technologies.
References:
- Murray, W., & Mansoor, P. R. (2012). Hybrid warfare: fighting complex opponents from the ancient world to the present. Cambridge University Press.
- King, B. (2016). Augmented: life in the smart lane. Marshall Cavendish Editions.
- Matz, S. C., Appel, R. E., & Kosinski, M. (2020). Privacy in the age of psychological targeting. Current Opinion in Psychology, 31, 116–121.
- Smart, B., Watt, J., Benedetti, S., Mitchell, L., & Roughan, M. (2022). #IStandWithPutin Versus #IStandWithUkraine: The Interaction of Bots and Humans in Discussion of the Russia/Ukraine War. In F. Hopfgartner, K. Jaidka, P. Mayr, J. Jose, & J. Breitsohl (Eds.), Social Informatics (Vol. 13618, pp. 34–53). Springer International Publishing.
- Viren, G. (2013). Modern media: Techniques of information wars. Moscow: Aspect Press.
- Diaz Ruiz, C., & Nilsson, T. (2023). Disinformation and Echo Chambers: How Disinformation Circulates on Social Media Through Identity-Driven Controversies. Journal of Public Policy & Marketing, 42(1), 18–35.