[Abstract image of digital connections with a brain icon]
Milton Wolf Seminar on Media and Diplomacy

The Double-Edged Sword of AI: Threats and Potential Salvations

By Elsa Isaksson

This year’s Milton Wolf Seminar on “Bots, Bombs, and Bilateralism: Evolutions in Media and Diplomacy” focused on the rising complexity and challenges of the contemporary global landscape. Alongside discussions on several other topics, it served as a timely platform for examining the paradoxical nature of artificial intelligence (AI) during a time of growing global tumult. With armed conflicts at an all-time high, increasing reports of attempted election interference, and declining trust in institutions, to mention a few, AI has emerged as a double-edged sword, capable of both exacerbating and alleviating these issues. This blog post explores some of the discussions at the 2024 seminar regarding the multifaceted threats posed by AI while asking whether, paradoxically, AI could also be our savior.

Panel Discussions: Navigating the Paradox of AI

In the nearly seven decades since John McCarthy, later a professor at Stanford, first introduced the term, AI has undergone significant transformations and become a powerful yet contentious force in global society. The 2024 Munich Security Report concludes that technology has transitioned from a catalyst of global prosperity into a tool of geopolitical competition. States are now de-risking and weaponizing semiconductor supply chains, advocating opposing visions of global tech governance, and vying for dominance in AI technology. The 2024 seminar also highlighted the growing concern over AI’s role in international affairs: as AI technologies become more sophisticated, so does their potential to cause harm, particularly in a world already grappling with instability. This year’s seminar featured several panels that, in different ways, touched upon these complex dynamics and offered insights into both the dystopian and utopian potentials of AI.

AI-driven Disinformation

AI has emerged as a powerful tool for the dissemination of disinformation and propaganda. States such as Russia and China are exploiting the technology to create and spread false information, contributing to a distorted reality in which truth risks becoming a casualty. The panel “Disrupted Democracy and the End of Truth” explored case studies that illustrate, in different ways, the impact AI-driven disinformation can have on democracies. AI algorithms can generate and amplify false narratives, manipulate media content, and exploit vulnerabilities within online platforms to spread disinformation at unprecedented scale and speed. Furthermore, AI-powered social media bots play a crucial role in amplifying this disinformation: they can create the illusion of substantial support for or interest in a post or topic, deceiving users into believing the content is more significant than it really is, which can fuel partisan debates. It has also been demonstrated that social bots are proficient at influencing online discussions and artificially boosting support for political campaigns and social movements.

Another significant aspect of AI-driven disinformation is its ability to create highly realistic fake content, including text, images, and videos. Generative AI models, built on deep learning algorithms, can produce convincing fakes that are difficult to distinguish from authentic media. This capability allows malicious actors to fabricate news articles, manipulate images and videos, and impersonate individuals or organizations, all with the goal of deceiving and manipulating audiences.

Media in Jeopardy

The threats that AI-generated fake news poses to journalism were addressed in the panel “Media in Jeopardy”. Fake news crafted by malicious actors to deceive and distort has the potential to erode public trust and undermine the credibility of the media. The proliferation of disinformation thus not only complicates the task of differentiating fact from falsehood but also risks eroding the credibility traditionally associated with journalism. This dual impact presents a significant barrier to journalism’s role as a fundamental pillar of knowledge dissemination in society.

The 2024 Super Election Year

2024 is often described as a super election year: more people than ever before in history will vote in elections planned in 76 countries worldwide, including Bangladesh, Brazil, India, Indonesia, Mexico, Pakistan, Russia, the US, and the election to the European Parliament. This heightened voter activity, however, also comes with significant risks, as AI can be used to disrupt elections through sophisticated disinformation campaigns, deepfakes, and targeted social media manipulation. As mentioned above, these tools can create and spread false information rapidly, undermining public trust in the electoral process and influencing voter behavior in harmful and undemocratic ways.

For example, a recent Department of Homeland Security (DHS) bulletin, obtained by CNN, warns that AI tools for creating fake video, audio, and other content will likely provide foreign operatives and domestic extremists with "enhanced opportunities for interference" in the 2024 US election cycle. The bulletin predicts that various "threat actors" will use generative AI to influence or sow discord during the elections. Similarly, in its Threat Intelligence insight from April 2024, Microsoft warns that China is using fake social media accounts to survey voters on their most divisive issues, with the aim of sowing discord and potentially influencing the outcome of the US presidential election to its own advantage.

Deepfake technology has increasingly been employed in election-related contexts in recent years in attempts to influence political outcomes. For example, at the beginning of this year, more than 100 deepfake video advertisements impersonating UK Prime Minister Rishi Sunak were paid to be promoted on Meta’s platform. Other recent examples include a fabricated video of Ukraine’s President Zelensky urging troops to surrender during the Russian invasion, a manipulated video of a Pakistani election candidate urging voters to boycott the election, and an AI-generated audio message featuring a fake Joe Biden discouraging voting in the New Hampshire primaries.

Can AI Be a Savior?

Despite these threats, the seminar also featured panels and discussions exploring AI’s potential to address some of the very issues it exacerbates. The question asked was: can AI save the day?

Potential Benefits of AI

While some panelists emphasized the potential risks associated with AI, others remained more optimistic, suggesting that through AI we may also find solutions to counteract these challenges. The panel “Can Tech Save the Day?” discussed both analog and digital solutions to counter the disruptive effects of AI. This was a recurring theme, not only in this panel but also in several of the other panels and in the discussions that followed.

For example, it was mentioned that AI can enhance decision-making by helping analyze vast amounts of data, enabling better decisions in crisis situations, and informing humanitarian responses and conflict resolution. It was also discussed whether and how AI can help journalists in their work, by automating routine tasks so that they can focus on in-depth investigative reporting and by providing tools to verify information quickly. The ethics of using AI in journalism was discussed as well.

Detect and Counter Disinformation

While it is undeniable that AI can be used unethically to propagate falsehoods and spread disinformation, its capabilities also offer a glimmer of hope in combating the very disinformation it helps disseminate. The rise of disinformation has sparked heightened interest in recent years in using AI to detect and combat the phenomenon: AI’s capacity for language processing, pattern recognition, and data analysis enables it to work through vast amounts of information with a speed and precision that would not be possible for human moderators.

One of the key strengths of AI in this context is its ability to detect subtle patterns and anomalies that might escape human scrutiny. By analyzing linguistic cues, social network dynamics, and other digital signals, AI algorithms can flag suspicious content and identify potential sources of disinformation.

As stated in a joint tech accord to combat the deceptive use of AI in the 2024 elections, signed by 20 tech companies and social media platforms including Amazon, Google, Microsoft, and Meta:

“AI also offers important opportunities for defenders looking to counter bad actors. It can support rapid detection of deceptive campaigns, enable teams to operate consistently across a wide range of languages, and help scale defenses to stay ahead of the volume that attackers can muster. AI tools can also significantly lower the cost of defense overall, empowering smaller institutions to implement more robust protections. These benefits can help counter adversaries leveraging AI technology.”

Balancing Innovation and Regulation

One of the overarching themes of the seminar’s discussions on whether AI can save the day was the urgent need for a balanced approach. While AI holds great promise, the potential harm it poses cannot be ignored. Throughout the discussions, developing robust regulatory frameworks, promoting ethical standards, and fostering international cooperation emerged as critical steps to ensure that AI contributes positively to global stability.

Global cooperation in the tech sector is increasingly being replaced by geopolitical competition, the 2024 Munich Security Report warns. In areas such as semiconductor and AI policy, Chinese and US policymakers are prioritizing outcompeting each other, diminishing the prospects for mutual gains. A clash between democratic and autocratic approaches to digital governance is unfolding, with China, the EU, and the US using digital regulation and infrastructure to promote their differing visions.

Fortunately, in response to the growing concerns over AI, several states and institutions are beginning to implement measures to regulate it. Building on its digital regulation efforts, the European Parliament and EU member states agreed on the new AI Act in December 2023. This act categorizes AI applications based on their risk levels and imposes corresponding restrictions.

In the United States, President Joseph Biden issued an executive order in October 2023 aimed at regulating AI. This order mandates standards for testing new AI systems, with developers required to share their results with the federal government. Additionally, an international AI Safety Summit held in the UK in November 2023 resulted in the Bletchley Park Declaration. This declaration, supported by China, the EU, and the US, commits these entities to cooperate in addressing the risks posed by AI.

AI companies and social media platforms, including major players like Amazon, Google, Microsoft, and Meta, have acknowledged their accountability in this matter. In the tech accord from 20 of these entities, drafted at the Munich Security Conference in February 2024, they outlined steps for using AI to address democracy-related risks during this year’s elections, including identifying AI-generated content, detecting its distribution, and addressing it while upholding principles of free expression and safety.

In the race for AI leadership, as argued in the 2024 Munich Security Report and by several panelists at the 2024 Milton Wolf Seminar, the need for global regulations to mitigate AI risks remains a moral imperative. States must therefore balance inevitable competition with essential cooperation.

Conclusion

The 2024 Milton Wolf Seminar illuminated the complex relationship between AI and global turmoil. As we navigate this double-edged sword, it is crucial to harness AI’s potential for good while mitigating its risks. The future of AI is not predetermined; rather, it will be shaped by the collective efforts made to steer its development in a more responsible direction.

References

Munich Security Report, “Lose-Lose?”, February 2024, https://securityconference.org/assets/01_Bilder_Inhalte/03_Medien/02_Publikationen/2024/MSR_2024/MSC_Report_2024_190x250mm_EN_final_240507_DIGITAL.pdf

Emilio Ferrara, Onur Varol, Clayton B. Davis, Filippo Menczer & Alessandro Flammini, “The Rise of Social Bots”, June 2017, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2982515#paper-citations-widget

Zhou Shao, Ruoyan Zhao, Sha Yuan, Ming Ding, Yongli Wang, “Tracing the evolution of AI in the past decade and forecasting the emerging trends”, December 2022, https://www.sciencedirect.com/science/article/abs/pii/S0957417422013732

Fátima C. Carrilho Santos, “Artificial Intelligence in Automated Detection of Disinformation: A Thematic Analysis”, March 2023, https://www.mdpi.com/2673-5172/4/2/43

Nicole Sganga & Kathryn Watson, “Generative AI poses threat to election security, federal intelligence agencies warn”, May 2024, https://www.cbsnews.com/news/generative-ai-threat-to-election-security-federal-intelligence-agencies-warn/

Microsoft, “Same targets, new playbooks: East Asia threat actors employ unique methods”, April 2024, https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/MTAC-East-Asia-Report.pdf

Lisa O’Carroll, “EU Agrees ‘Historic’ Deal With World’s First Laws to Regulate AI,” The Guardian, December 9, 2023.

The White House, “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,” Washington, DC: The White House, October 30, 2023, https://perma.cc/P3LT-MZQC.

Department for Science, Innovation and Technology, Foreign, Commonwealth and Development Office, and Prime Minister’s Office, “The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023,” London: Department for Science, Innovation and Technology, Foreign, Commonwealth and Development Office, and Prime Minister’s Office, November 1, 2023, https://perma.cc/P8BC-B2JS.
