JOURNAL OF GOVERNANCE AND POLITICS

JOURNAL OF SOCIETY AND THE STATE

SCHOOL OF GOVERNANCE AND POLITICS, MGIMO UNIVERSITY, RUSSIA

AI versus Social Media in Information Warfare

Kirill Chernyavskiy
School of Governance and Politics, MGIMO University

Georgy Novikov
School of Governance and Politics, MGIMO University

Pavel Lapin
School of Governance and Politics, MGIMO University

Abstract

The article explores the intersection of social media and artificial intelligence (AI) in shaping global narratives and their impact on international relations in the era of hybrid warfare. The research field is political science and its relationship to new technologies. The first part of the article analyses the impact of social media and AI on politics; the second examines how the use of these tools can be regulated and limited. The use of social media and AI to disseminate propaganda and sow confusion has become a critical component of the battle for public opinion. Machine learning algorithms enable the identification and targeting of specific demographics with tailored messaging that reinforces their beliefs or biases. The same tools can also be used to exacerbate tensions between nations, eroding trust and increasing the risk of conflict. The rise of social media and AI has created a complex regulatory landscape that is difficult to navigate, making it necessary to re-evaluate the future of global politics.

Introduction

In today's interconnected world, the role of social media and artificial intelligence (AI) in forming global narratives has grown significantly. Platforms such as Facebook*, Twitter*, Instagram* (*organizations banned on the territory of the Russian Federation), and YouTube offer unparalleled immediacy and reach, allowing users to share experiences and opinions in real time. Social media has thus emerged as a potent force in influencing public discourse, building communities, and even driving political and social events.

Beyond content sharing, AI technologies wield tremendous power in curating material, personalizing recommendations, and targeting advertisements on these platforms. AI algorithms analyse colossal amounts of user data, including browsing history, trends, and engagement habits, to customize content for individual users. This personalization has transformed the information landscape, as AI functions as a gatekeeper for the narratives people access.

The integration of social media and AI furnishes numerous advantages, granting users access to content and perspectives that were previously unobtainable. It undeniably gives voice to marginalized groups, allowing them to express themselves and take part in global conversations. Furthermore, the technology can help curb the spread of disinformation and misinformation, problems that continue to grow in the digital era.

Still, the use of AI in moulding global narratives raises concerns. The AI-driven selections that dictate what users see in their feeds can inadvertently generate filter bubbles and echo chambers, exacerbating existing biases and constraining exposure to differing viewpoints. This can lead to opinion polarization and the erosion of a shared reality, inhibiting meaningful dialogue.

The legality and ethics of employing AI algorithms to curate and personalize material also remain contentious. AI is prone to producing discriminatory outcomes, amplifying untruths, and violating user privacy. The opacity and limited accountability of AI systems necessitate scrutiny of the technology's inner workings and its potentially disastrous effects on global narratives.

To address this issue effectively, we must seek ways to harness the potential of social media and AI while avoiding adverse effects. Building responsible AI models, exercising transparency, and preserving user autonomy can help ensure that global narratives are composed in a fair, ethical, and beneficial manner. The need for social media companies, AI researchers, policymakers, and civil society to work hand in hand on laws and frameworks is growing. Understanding the intricate interaction between AI and social media is a prerequisite for navigating the ever-shifting landscape of global narratives with greater awareness and acuity.

Background

The Role of Social Media in Shaping Global Narratives

Social media platforms have revolutionized the way information is disseminated, presenting unparalleled possibilities to shape global narratives. From Facebook and Twitter to Instagram and YouTube, these platforms have become invaluable tools for amplifying diverse voices and creating real-time conversations on a global scale.

Social media allows individuals to take an active role in the dissemination of news content. Individuals can now create and share media from their own unique perspectives, allowing marginalized stories to come to the forefront and disrupting dominant narratives previously overlooked by traditional media outlets.

Zeynep Tufekci, in her book Twitter and Tear Gas: The Power and Fragility of Networked Protest [1], acknowledges the immense power of social media in facilitating social movements. These platforms have enabled marginalized groups to mobilize, share their stories, and challenge existing power dynamics. Moreover, the reach and speed of social media have allowed information and ideas to cross geographical and cultural boundaries with ease, forging global narratives and a shared understanding of various issues.

However, this newfound power of social media over global narratives also has drawbacks. The algorithms these platforms employ tend to prioritize content based on user engagement metrics, creating echo chambers and filter bubbles. As a result, users may be exposed to limited perspectives, reinforcing their own biases and impeding accurate understanding of complex issues, as Eli Pariser warns in his book The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think [2].
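The feedback loop described here can be illustrated with a deliberately simplified sketch (not any platform's actual ranking system): if predicted engagement is approximated by how often a user has already interacted with a topic, ranking by that score keeps surfacing the same topic first. The function names and the toy scoring rule are our own illustrative assumptions.

```python
from collections import Counter

def engagement_score(post_topic, user_history):
    # Toy proxy: predicted engagement = share of the user's past
    # interactions that match this post's topic.
    counts = Counter(user_history)
    return counts[post_topic] / max(len(user_history), 1)

def rank_feed(candidate_topics, user_history):
    # Order candidate posts by predicted engagement, descending,
    # as an engagement-optimizing feed would.
    return sorted(candidate_topics,
                  key=lambda t: engagement_score(t, user_history),
                  reverse=True)

history = ["politics", "politics", "sports", "politics"]
feed = rank_feed(["science", "politics", "sports"], history)
# "politics" surfaces first; topics the user never engaged with sink,
# which is the narrowing dynamic behind filter bubbles.
```

Even this crude model shows why engagement-optimized ranking, left unchecked, narrows rather than broadens what a user sees.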

Furthermore, the trustworthiness of social media is challenged by the spread of misinformation and disinformation. Kate Starbird, an expert on online misinformation, highlights in her research how social media can be a powerful enabler of false information, influencing political opinions and decision-making across the world. It is therefore essential to develop strategies to actively limit the impact of misinformation on global narratives.

Social media has revolutionized the way news content is created and shared, and global narratives now depend heavily on these platforms. While individuals and communities have been empowered to present their own unique perspectives, the risks of algorithmic bias, echo chambers, and false information remain. Understanding the potential of social media as well as the challenges it presents is the key to effectively leveraging its power and mitigating its negative consequences.

AI Influence on Global Narratives

The role of Artificial Intelligence (AI) algorithms in shaping global narratives on social media is undeniable. As algorithms fuelled by data analysis and machine learning techniques curate content, personalize recommendations, and target advertising, these powerful tools have the potential to profoundly alter public opinion.

AI-driven curation, for example, can create filter bubbles and echo chambers that reinforce existing beliefs and limit exposure to alternative viewpoints.

Also to be taken into consideration are the perils of personalized recommendations, as AI algorithms prioritize content likely to keep users glued to the platform. Similarly, algorithmic bias can lead to the suppression or amplification of certain narratives based on factors such as race, gender, or socio-economic status.

On the other hand, research on AI systems for misinformation detection provides hope, as AI systems can flag potentially misleading content and help mitigate the spread of fake news.

To ensure a balanced and inclusive information ecosystem, it is imperative that AI algorithms are used responsibly, and that filter bubbles, biases, and misinformation are addressed. This would ensure that global narratives are shaped ethically and without prejudice.

Research Description

Promoting Responsible AI and Social Media Use

As the influence of Artificial Intelligence (AI) and social media on the formation of global narratives continues to grow, the need for responsible practices to ensure ethical and beneficial outcomes becomes increasingly clear. By illuminating the challenges associated with AI algorithms and social media use, we can strive towards an equitable, transparent, and accountable information ecosystem.

Achieving greater transparency in the development and deployment of AI algorithms is essential. In their paper "There Is a Blind Spot in AI Research", Kate Crawford and Ryan Calo [3] emphasize the importance of lifting the veil of obscurity surrounding AI algorithms. Measures that permit the investigation of biases, discrimination, and ethical issues within such systems promote scrutiny and accountability. Only then will AI properly support the formation of more equitable global narratives.

The research of Timnit Gebru [4] and Joy Buolamwini [5] addresses algorithmic bias and provides a framework for identifying, understanding, and defending against it. Gebru and Buolamwini stress the pernicious effects of prejudice and discrimination embedded in AI systems, which can deepen inequity across race, gender, and socio-economic lines. Mitigating such biases requires rigorous auditing of AI algorithms, together with diverse and inclusive AI development teams.
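The auditing step mentioned above can be made concrete with a minimal sketch in the spirit of disaggregated evaluation: compute a model's accuracy separately per demographic group and report the worst-case gap. This is not the authors' methodology, only a toy illustration; the record format and group labels are assumptions.

```python
def group_accuracy(records, group):
    # Accuracy of the model's predictions restricted to one group.
    hits = [r["pred"] == r["label"] for r in records if r["group"] == group]
    return sum(hits) / len(hits)

def audit(records, groups):
    # Per-group accuracy plus the worst-case accuracy gap,
    # the headline number in a disaggregated audit.
    accs = {g: group_accuracy(records, g) for g in groups}
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

records = [
    {"group": "A", "pred": 1, "label": 1},
    {"group": "A", "pred": 0, "label": 0},
    {"group": "B", "pred": 1, "label": 0},
    {"group": "B", "pred": 1, "label": 1},
]
accs, gap = audit(records, ["A", "B"])
# accs == {"A": 1.0, "B": 0.5}; gap == 0.5 flags a disparity to investigate
```

A non-trivial gap does not by itself prove unfairness, but it tells auditors exactly where to look, which is the practical value of this kind of routine check.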

By undertaking voluntary practices such as increasing transparency, carefully monitoring for algorithmic bias, and representing diversity in the field of AI, we can strive towards the responsible shaping of global narratives.

Transparency, accountability, user empowerment, and digital literacy are all essential elements for creating a responsible AI and social media environment. By equipping users with the digital skills to distinguish credible sources and navigate complex online information, they can actively engage in informed digital interactions. Media literacy education and public awareness campaigns can enhance users’ ability to critically evaluate global narratives. Collaboration between stakeholders, including social media companies, AI researchers, and policymakers, is key to the development and implementation of regulations, guidelines, and ethical frameworks that promote fairness and user empowerment in digital media. Through embracing transparency, addressing algorithmic bias, and facilitating multi-stakeholder dialogue, a global information ecosystem can be formed that upholds diverse perspectives, mitigates biases, and encourages the dissemination of meaningful narratives.

Results

The intersection of social media and artificial intelligence (AI) has spawned a new era of global narratives – one where information is disseminated, consumed, and shaped in far-reaching and multifaceted ways. These platforms have become vibrant agents of change, allowing individuals and collectives to actively contribute and modify content, disrupt traditional narratives, and promote voices that have been historically side-lined. AI algorithms, meanwhile, are concurrently influencing these processes, curating content, personalizing recommendations, and addressing misinformation.

However, the effects of social media and AI on global narratives also bring serious challenges. Filter bubbles, algorithmic bias, and the rapid spread of misinformation all threaten the inclusivity and integrity of these systems. To meet these challenges and foster responsible AI and social media use, heightened transparency and accountability of algorithmic processes are paramount. Greater checks on, and disclosure of, AI technologies could help mitigate potential biases, ensuring that global narratives are not undermined by unfairness or discrimination.

Moreover, empowering users through media literacy education will be a critical step towards promoting responsible social media use. By equipping individuals with the skills to assess information accurately, identify credible sources, and engage in informed debate, an informed user base can be cultivated. In addition, collective efforts among social media companies, AI academics, policymakers, and civil society organizations are needed to establish regulations, guidelines, and ethical frameworks that govern the development and deployment of AI algorithms responsibly.

Conclusion

Ultimately, the aim in using social media and AI to craft global narratives ought to be the creation of an information ecosystem that champions diversity, inclusivity, and ethical conduct. By promoting transparency, countering bias, emphasizing user control, and galvanizing joint efforts, the power of social media and AI to foster mutual understanding, challenge dominant narratives, and drive positive change can be harnessed. Only when individuals, groups, and society as a whole make a collective effort to grapple with the difficult aspects of social media and AI can we contemplate a future in which these tools serve to promote discussion and comprehension, and give distinct voices a part in refining the narratives that frame the lives of people worldwide.

References:

[1] – Tufekci, Z. (2017). Twitter and Tear Gas: The Power and Fragility of Networked Protest.

[2] – Pariser, E. (2012). The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think.

[3] – Crawford, K., & Calo, R. (2016). "There Is a Blind Spot in AI Research."

[4] – Gebru, T., et al. (2018). "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification."

[5] – Buolamwini, J. (2020). "The State of AI Ethics Report."
