AI-based Technologies in Hybrid Conflicts
Elizaveta Grishankova, School of Governance and Politics, MGIMO University;
Margarita Ledyayeva, School of Governance and Politics, MGIMO University;
Ivan Panchenko, School of Governance and Politics, MGIMO University
Abstract
The use of AI in military operations, including conventional warfare tactics and unconventional methods of countering the enemy such as bots and cyber forces, has both benefits and risks. AI has the potential to enhance military capabilities, provide better situational awareness, improve decision-making, and increase the speed and accuracy of operations. The efficiency and low cost of operation of bots and cyber forces have made them common inhabitants of social networks, but they also raise concerns about disinformation, trolling, censorship, and the manipulation of people's consciousness. Therefore, ethical considerations must be taken into account in the development and use of AI-based technologies in military operations to ensure legal and ethical compliance.
Keywords: AI-based technologies, hybrid conflicts, national security, conventional warfare tactics, machine learning algorithms, decision-making, autonomous drones, bots, cyber forces, social networks, information attacks, ideological processing, ethical considerations.
Introduction
In recent years, the use of AI-based technologies in hybrid conflicts has become a significant concern for national security and global stability. Hybrid conflicts refer to complex warfare scenarios that involve a combination of conventional military tactics, cyber warfare, and information operations. AI-based technologies, such as machine learning algorithms, natural language processing, and deep learning systems, can be used to enhance the effectiveness and efficiency of hybrid warfare operations. However, the use of these technologies also poses significant risks and challenges, such as the potential for unintended consequences and the need to ensure ethical and legal compliance.
In this article, we explore the use of AI-based technologies in hybrid conflicts, the potential benefits and risks associated with their use, and the ethical considerations that must be taken into account.
Main Part
AI has shown particular promise in both conventional and unconventional warfare tactics.
In conventional operations, artificial intelligence (AI) is increasingly being used to enhance military capabilities by providing commanders with better situational awareness, improving decision-making, and increasing the speed and accuracy of operations.
The use of AI in conventional warfare tactics has already been seen in various military operations. For instance, the United States Air Force is currently developing autonomous drones that can make decisions without human intervention, which could be used in combat situations in the future. In addition, the Israeli Defense Forces have used AI to analyze drone footage and provide real-time intelligence to commanders on the ground during military operations in Gaza. The US military has also used AI to analyze satellite imagery to identify potential threats and track the movement of enemy forces.
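The imagery-analysis systems mentioned above are proprietary, but the underlying idea can be illustrated with a minimal sketch that runs an off-the-shelf object detector over a single frame. The model choice, file name, and confidence threshold below are illustrative assumptions, not details of any deployed military system.

```python
# Minimal sketch: automated object detection in a single image frame using a
# generic pretrained detector (illustrative only; not any military system).
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

# A COCO-pretrained detector; a real system would use a model trained on
# aerial or satellite imagery (vehicles, installations, etc.).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = read_image("frame_0001.jpg")                # hypothetical drone frame
batch = [convert_image_dtype(image, torch.float)]   # detector expects floats in [0, 1]

with torch.no_grad():
    detections = model(batch)[0]

# Report only confident detections for an analyst to review.
for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.8:  # arbitrary confidence threshold
        print(f"class {label.item()} at {box.tolist()} (confidence {score.item():.2f})")
```

The point of the sketch is the division of labour it implies: the model filters enormous volumes of imagery, while a human analyst decides what the flagged detections mean.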
In addition, developments in this sphere are continuing: American billionaire Peter Thiel's company Palantir has unveiled an AI platform that runs large language models such as GPT-4 and its alternatives on private networks. The platform is designed to help analysts and commanders make sense of large amounts of data from multiple sources, such as intelligence reports, sensor data, and social media. It uses AI algorithms to identify patterns, connections, and anomalies in the data and present them in a way that is easy to understand. The platform has been used in a range of military operations, from counter-terrorism to logistics planning.
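Palantir's platform itself is closed, but the general idea of flagging anomalies in records merged from several sources can be shown with a minimal sketch. The field names, numbers, and contamination parameter below are invented for illustration; a real pipeline would involve far more elaborate ingestion and modelling.

```python
# Minimal sketch: flagging anomalous records in data merged from several
# sources (a toy illustration of the general idea, not Palantir's platform).
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical records already merged from sensors, reports and social media;
# in practice each source needs its own ingestion and normalisation step.
records = pd.DataFrame({
    "messages_per_hour": [12, 15, 14, 220, 13, 16, 11, 240],
    "unique_locations":  [1,  1,  2,   9,  1,  2,  1,  11],
    "night_activity":    [0,  0,  1,   7,  0,  1,  0,   8],
})

# Isolation Forest labels points that are easy to isolate, i.e. that differ
# strongly from the bulk of the data, as anomalies (-1).
detector = IsolationForest(contamination=0.25, random_state=0)
records["anomaly"] = detector.fit_predict(records)

print(records[records["anomaly"] == -1])  # rows an analyst might review first
```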
These examples demonstrate the significant potential benefits of using AI in military operations.
However, AI is most actively employed in unconventional methods of countering an adversary. The most prominent examples of such use are bots and cyber forces.
The Oxford Learner's Dictionary defines a bot as "a computer program that runs automated tasks over the internet". The automation behind such software is, in essence, a form of narrow artificial intelligence.
The efficiency of bots and their low cost of operation have made politicians and members of the media take notice. Bots have become common inhabitants of social networks and now account for an impressive share of accounts: on Twitter, an estimated 9-15% of accounts are artificially created, and on Facebook about 5%.
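Estimates of this kind come from automated classification of accounts. The sketch below shows a deliberately simplified scoring heuristic; the features and thresholds are invented for illustration, and real studies rely on much richer supervised classifiers trained over hundreds of behavioural features.

```python
# Deliberately simplified sketch of scoring accounts as likely bots.
# Features and thresholds are illustrative, not taken from any real study.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    account_age_days: int
    followers: int
    following: int
    default_avatar: bool

def bot_score(acc: Account) -> float:
    """Return a rough 0-1 score; higher means more bot-like."""
    score = 0.0
    if acc.posts_per_day > 50:                      # inhuman posting rate
        score += 0.4
    if acc.account_age_days < 30:                   # freshly created account
        score += 0.2
    if acc.following > 10 * max(acc.followers, 1):  # mass-follows, little follow-back
        score += 0.2
    if acc.default_avatar:                          # no effort to look personal
        score += 0.2
    return min(score, 1.0)

accounts = [
    Account(posts_per_day=3,   account_age_days=900, followers=250, following=310,  default_avatar=False),
    Account(posts_per_day=120, account_age_days=12,  followers=4,   following=2000, default_avatar=True),
]

flagged = sum(bot_score(a) > 0.5 for a in accounts)
print(f"{flagged}/{len(accounts)} accounts flagged as likely bots")
```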
Programmable bots can leave comments and reactions and even message other users on the network. However, bots created by programmers are quite primitive and cannot convincingly sustain the appearance of a live account.
Because of this, commercial companies, political parties and even governments have begun to organize teams of cyber-soldiers for use in hybrid conflicts.
Cyber-soldiers are understood as people who are paid to maintain dozens of accounts on different social networks on a regular basis, using them to support state ideology and, in times of conflict, to shape a deliberately false public opinion. The size of these online teams varies from country to country, depending on the budget allocated.
An important issue in the study of bots and cyber forces is the strategy behind their actions, as well as the tone of the messages they send. The main strategies are the following:
- Disinformation. This strategy includes publishing fake news, deceptive memes and other misleading information.
- Trolling and harassment. Another type of communication is trolling and stalking on the Internet. Trolls are often assumed to be groups of young people or students, although in practice such teams can consist of a wide variety of people. Through this technique, bots are used to silence a particular source: for example, fake accounts can label or target a company, politician or alternative media outlet with defamatory language.
- Censorship. Cyber-soldiers also actively censor content through the mass deletion of accounts and information. This manipulative technique exploits a vulnerability of social networks, namely their moderation algorithms: fake accounts report allegedly illegal content to the platform administration en masse, thereby forcing the algorithms to block posts (a toy illustration of this mechanism is sketched after this list).
- The "shared wagon theory." Cybersoldiers also use the shared wagon strategy by flooding fake accounts in the comments of some post. The purpose of this action is to make an individual feel that an opposing viewpoint on a given situation is supported by more people. The emphasis is on the individual's desire to belong to a larger group of people, because he or she is afraid of being outnumbered and of becoming something of an outcast.
Thus, bots and the teams of cyber-soldiers created to control them pollute the ecosystem of the digital space, creating fake accounts and conducting daily information attacks on people's consciousness. The situation is aggravated by the fact that researchers identify more and more countries using this tool of manipulation every year, and the cost of maintaining digital mouthpieces on Facebook alone has already reached $10 million. Given people's gradual migration into the online space, it can be said that they are now subjected to daily ideological processing by various political actors. Such bombardment polarizes society and at times undermines people's trust in the institutions of power, and in government officials in particular.
In view of the above, the ethical aspect of the use of AI technology in hybrid warfare cannot be overlooked. The use of AI in military operations raises ethical concerns about the potential for unintended consequences and the violation of human rights. For example, the development of autonomous weapons systems that can make decisions without human intervention raises questions about accountability and the potential for errors or malfunctions. The use of AI in cyber warfare and information operations also raises concerns about the manipulation of public opinion and the potential for propaganda and disinformation campaigns. It is crucial for military organizations to ensure that the use of AI-based technologies is guided by ethical principles, such as transparency, accountability, and respect for human rights, and that appropriate safeguards are put in place to prevent unintended consequences and ensure compliance with international and national laws.
Conclusion
In conclusion, the use of AI in hybrid conflicts presents both opportunities and challenges for military operations. AI-based technologies can improve situational awareness, decision-making, and the speed and accuracy of operations. However, the use of autonomous weapons systems raises ethical and legal concerns. Moreover, the proliferation of bots and cyber-soldiers in social media networks presents a serious threat to the integrity of information and public discourse. The strategies used by these actors, such as disinformation, trolling, censorship, and the bandwagon tactic, manipulate people's consciousness, polarizing society and undermining trust in institutions. It is essential that the ethical and legal implications of using AI in military operations are carefully considered, and that measures are taken to prevent the abuse of AI-based technologies in hybrid conflicts.
References
- Konert A., Balcerzak T. Military autonomous drones (UAVs) – from fantasy to reality. Legal and ethical implications // Transportation Research Procedia. 2021. No. 59. pp. 292-299.
- Aitken P., Friling Y. Israel military builds up AI battlefield tech to hunt Hamas terrorists, protect against Iran threat // Fox News. 14.04.2023.
- Stewart P. INSIGHT – Deep in the Pentagon, a secret AI program to find hidden nuclear missiles // Reuters. 05.06.2018.
- Palantir Demos AI to Fight Wars But Says It Will Be Totally Ethical Don't Worry About It // Vice. URL: https://www.vice.com (accessed: April 26, 2023).
- Definition of bot noun from the Oxford Advanced Learner's Dictionary // Oxford Learner's Dictionaries. URL: https://www.oxfordlearnersdictionaries.com/definition/english/bot (accessed: April 26, 2023).
- Panchenko I.A. Internet space as a new field of manipulation of mass consciousness: the case of modern Russia: graduation thesis, 41.03.04 Political Science. Moscow, 2022. 60 p.
- Grishankova E.A. National strategy for the development of AI for the period up to 2030: term paper. Moscow, 2022. 31 p.