Milton Wolf Seminar on Media and Diplomacy

The Double-Edged Sword of Artificial Intelligence: Navigating the Risks and Rewards

By Neil Fasching

Artificial Intelligence (AI) stands at the forefront of technological innovation, bringing with it unprecedented advancements and challenges that are reshaping the fabric of our global society. As we move deeper into an AI-dominated era, it becomes crucial to understand both its potential to drive progress and the risks it poses to democracy, privacy, and social order. At the 2024 Milton Wolf Seminar on Media and Diplomacy, titled “Bots, Bombs, and Bilateralism: Evolutions in Media and Diplomacy,” we discussed a range of pressing issues that underscore the dualistic nature of AI, highlighting both its disruptive and constructive capabilities. While the conversation leaned toward AI’s detrimental potential, there were also discussions of how AI can serve as a productivity tool for both academics and journalists. In this blog post, I will highlight both the dangers and the benefits that AI presents, especially in the realms of media and diplomacy, as explored during the seminar.

Potential Dangers of AI

First, to understand the dangers posed by AI, it is important to have a clear definition of what constitutes artificial intelligence. AI encompasses technologies that enable machines to perform tasks that typically require human intelligence, such as understanding natural language, recognizing patterns, making decisions, and learning from data. However, as several researchers argued during the seminar, the breadth of this definition complicates our ability to foresee and mitigate the associated risks. It also remains unclear what constitutes an “expert” in AI, which complicates the regulatory process and the setting of standards for its safe and ethical use. This ambiguity surrounding AI expertise can lead to gaps in governance, where crucial decisions about deployment and control are made by people without sufficient understanding of the technology.

Despite the lack of a clear definition, several specific examples highlight the dangers AI poses. AI’s impact on social movements and democracy, for instance, is a cautionary tale. The use of AI-based moderation tools, as seen in countries like India, exemplifies the pitfalls of relying too heavily on automated systems. These tools, while designed to streamline content moderation on digital platforms, are prone to errors and can inadvertently suppress legitimate expression under the guise of maintaining decorum (Henshall, 2023). This not only threatens free speech but also undermines the democratic process by filtering what information reaches the public.

Further, there was intense discussion of the role AI could play in disrupting journalism. Some attendees at the seminar expressed concern over the increasing reliance on AI for generating content. While AI can assist in covering repetitive or data-intensive stories, it also raises questions about the authenticity and depth of reporting. The journalists at the seminar, however, argued that this concern is overblown. Much like the advent of television and then the internet, generative AI tools could reshape journalism rather than diminish it. Generative AI has known problems, such as hallucinations (Yao et al., 2023) and biases (Feng et al., 2023; Hovy & Prabhumoye, 2021), but the key to benefiting from it, they suggested, lies in how these tools are used. By automating routine tasks and helping with idea generation, AI may not undermine journalism in the way others have argued. Generative AI may well be used to help write puff pieces, but in investigative journalism the need for human insight, ethical deliberation, and deep investigative skills remains irreplaceable. AI tools can make data analysis and initial drafting more efficient, but the core of investigative journalism, which rests on nuanced understanding, context, ethical judgment, and a human touch, cannot be fully automated.

The centralization of AI technology in the hands of major tech giants like Amazon (AWS) and Microsoft further complicates the landscape. Despite regulatory attempts, the global reach of these companies often renders local laws ineffective, creating a quasi-regulatory vacuum in which technology outpaces legal frameworks. This is risky because it may grant corporations disproportionate power over national and international affairs, affecting everything from election integrity to journalistic freedom. This concentration of control also shapes open-source models, which are supposed to democratize AI but often still depend on proprietary technologies or platforms owned by these corporations. As researchers pointed out during the seminar, major tech companies that find themselves behind the curve on AI will use the open-source paradigm to reinsert themselves as tech leaders. Meta, for example, has publicly released the weights of models such as Llama 3 under its own community license, which is often described as open source even though it carries Meta-specific restrictions. This gives Meta a strategic advantage, allowing it to set the standards and direction for AI development even while trailing slightly behind the technological curve. Moreover, these powerful openly released models still require the infrastructure provided by the top tech companies to be deployed at scale.

Moreover, the pervasive influence of AI in manipulating information, through deepfakes and selective content amplification, can thicken the fog of war around truth, fueling misinformation and further destabilizing fragile democracies. Recent investigative journalism by Steven Lee Myers at the New York Times highlights this. In a recently published article, Myers (2023) documented numerous instances in which AI-generated content was used to create highly realistic and misleading narratives. These AI tools were deployed to craft fake videos and audio recordings that appeared to show political figures engaging in corrupt or scandalous activities. Such content, though completely fabricated, was spread across social media platforms to manipulate public opinion and influence political outcomes. The sophistication of these techniques makes it increasingly difficult for the average viewer to distinguish between genuine and manipulated content, eroding trust in media and governmental institutions.

The Potential Benefits of AI

Despite these significant challenges, AI also offers transformative benefits that can enhance societal functions. In his short speech, Timothy Dorr, a PhD student at the University of Pennsylvania, called on seminar participants not only to discuss the possible downsides of AI but also its potential to improve society, especially in the realms of media, academia, and diplomacy.

In academia, the positives are quite clear. Research has shown that generative AI models can help social scientists with a wide range of tasks, such as accurately annotating large bodies of text (Gilardi, Alizadeh, & Kubli, 2023) and bootstrapping challenging creative generation tasks (Ziems et al., 2024), although these approaches have documented limitations and require careful validation (Pangakis, Wolken, & Fasching, 2023). As I emphasized in my short talk, generative AI tools will allow me to process and analyze the content of thousands of political podcasts, a task that would not have been possible in years past.
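To make this concrete, the sketch below shows what such an annotation workflow might look like. It is a minimal, hypothetical example, not the pipeline from my talk or from the cited papers: it assumes the OpenAI Python client, an invented input file of transcript segments, and illustrative topic labels.

```python
# Hypothetical sketch: annotating podcast transcript segments with a generative model.
# Assumes the OpenAI Python client (pip install openai) and an invented CSV of segments;
# the file names, labels, and prompt are illustrative only.
import csv
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["economy", "foreign policy", "elections", "other"]

def annotate(segment: str) -> str:
    """Ask the model to assign exactly one topic label to a transcript segment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Label the text with exactly one of: {', '.join(LABELS)}. "
                        "Reply with the label only."},
            {"role": "user", "content": segment},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

with open("podcast_segments.csv", newline="", encoding="utf-8") as infile, \
     open("annotated_segments.csv", "w", newline="", encoding="utf-8") as outfile:
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=["segment_id", "text", "label"])
    writer.writeheader()
    for row in reader:
        writer.writerow({"segment_id": row["segment_id"],
                         "text": row["text"],
                         "label": annotate(row["text"])})
```

Consistent with the validation point above (Pangakis, Wolken, & Fasching, 2023), labels produced this way would still need to be checked against human annotations on a sample before being used in any analysis.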

Likewise, AI's capacity to handle large volumes of data can revolutionize fields like journalism and public administration, making information processing and dissemination more efficient. Tools like Google's Pinpoint show how AI can aid journalists by sifting through expansive datasets to uncover stories that might otherwise remain hidden.

Furthermore, the precision and analytical power of AI can be harnessed to combat the very issues it creates. For example, advanced AI can be developed to identify and counteract misinformation, supporting efforts to maintain the integrity of information ecosystems. This "good AI" could potentially safeguard elections, enhance public discourse, and ensure that truth prevails in an increasingly complex media landscape. However, as discussed in the seminar, this presents the ironic situation where we need the tool to fix the problems created by that very tool.
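As a toy illustration of what such “good AI” might look like in practice, the sketch below uses an off-the-shelf zero-shot classifier from the Hugging Face transformers library to flag posts that resemble a known false narrative. This is my own hypothetical example rather than a system discussed at the seminar; the posts, labels, and threshold are invented, and a real detection pipeline would be far more elaborate and would still require human review.

```python
# Toy sketch: flagging posts that resemble a known false narrative with an
# off-the-shelf zero-shot classifier (pip install transformers torch).
# The example posts, candidate labels, and threshold are invented for illustration.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

CANDIDATE_LABELS = [
    "claims the election was rigged",
    "neutral reporting about the election",
    "unrelated to elections",
]

posts = [
    "Officials secretly swapped the ballots overnight, the result is fake!",
    "Turnout figures were released this morning by the election commission.",
]

for post in posts:
    result = classifier(post, candidate_labels=CANDIDATE_LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    flagged = top_label == CANDIDATE_LABELS[0] and top_score > 0.7
    print(f"{'FLAG' if flagged else 'ok  '} ({top_score:.2f}) {post}")
```

Even in this toy form, the example shows why human oversight remains essential: the classifier only scores how closely a post resembles a narrative; it does not verify facts.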

Looking Ahead: Regulation and Transparency

The future of AI necessitates robust regulatory frameworks that can keep pace with technological developments. Discussions about AI transparency are crucial, including what AI transparency concretely means and the difference between transparency of the training data and transparency of the model itself. These discussions should lead to concrete actions ensuring that AI systems are fair, accountable, and transparent, thereby fostering trust among the public and policymakers alike. While this need clearly exists, the seminar participants also noted the large disconnect between tech companies and regulators.

Moreover, the notion that only private companies possess the computing power necessary for AI development calls for a reevaluation of resource allocation and public investment in technology. Ensuring that AI development is not solely in corporate hands can help democratize AI and distribute its benefits more equitably. This is easier said than done, however. As these models require ever greater computational power, the only entities currently able to train them are the big tech companies. The solutions to this problem carry drawbacks of their own. Having the government regulate these technologies could spell disaster, as lawmakers are often ill-equipped to understand the nuances of AI and its rapidly evolving landscape, and heavy-handed regulations could stifle innovation and deter private investment in new AI technologies. Likewise, relying solely on the tech companies to self-regulate is fraught with conflicts of interest, as these companies may prioritize profitability over the public good. This situation underscores the need for a balanced, hybrid model of regulation, one that combines government oversight, industry self-regulation, and active participation from independent third parties such as academics and non-profit organizations.

Conclusion

As we stand at this technological crossroads, the 2024 Milton Wolf Seminar's reflections on AI provide a valuable framework for understanding and navigating the complexities of this new era. By balancing the innovative potential of AI with vigilant oversight and ethical considerations, we can harness its power while mitigating its risks, ensuring that AI serves as a tool for societal enhancement rather than a catalyst for disorder.

References

Feng, S., Park, C. Y., Liu, Y., & Tsvetkov, Y. (2023). From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models (Version 3). arXiv. https://doi.org/10.48550/ARXIV.2305.08283

Henshall, W. (2023, October 4). Global internet freedom declines, aided by AI. Time. https://time.com/6319723/global-internet-freedom-decline-2023/

Hovy, D., & Prabhumoye, S. (2021). Five sources of bias in natural language processing. Language and Linguistics Compass. https://doi.org/10.1111/lnc3.12432

Pangakis, N., Wolken, S., & Fasching, N. (2023). Automated Annotation with Generative AI Requires Validation (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2306.00176

Yao, J.-Y., Ning, K.-P., Liu, Z.-H., Ning, M.-N., & Yuan, L. (2023). LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples (Version 2). arXiv. https://doi.org/10.48550/ARXIV.2310.01469

Ziems, C., Held, W., Shaikh, O., Chen, J., Zhang, Z., & Yang, D. (2024). Can large language models transform computational social science? Computational Linguistics, 50(1), 237-291. https://doi.org/10.1162/coli_a_00502
