Milton Wolf Seminar on Media and Diplomacy

A Hydra in Sheep's Clothing? The Many-Faced Challenges of AI

By Natasha Williams

Talk of artificial intelligence (AI) appears ubiquitous nowadays. Pervasive in circles across academia, journalism, and policymaking, AI has broadly surfaced as a hot-button issue and a prolific resource for average individuals and geopolitical actors alike. AI’s increasingly commonplace nature, however, and the manifold ways in which it continually saturates our everyday lives, remain the crux of both the promises and problems it represents. Unpacking exactly what we mean when we talk about AI – namely, attending to the many technologies, complications, and valences the term encompasses – and disentangling the matrix of concerns at stake remains a first-order challenge and a necessary point of clarification for those with a vested interest in the issue space. Disaggregating the patchwork of AI urgently warrants our attention and disambiguation, as a foundation for adequate policymaking and scholarly research on the subject moving forward.

Conceptually, AI as a term is loaded with ambiguity, simultaneously representing many different things to many different audiences. On the one hand, AI’s role as a catch-all phrase across various fields – from computer science, to trust and safety, to security – conveniently captures emerging technological developments that remain largely inaccessible even to technologically savvy individuals. On the other hand, as a vague and generously applied label of technobabble jargon, AI commands a level of unwitting, unscrutinized authority among the public, which impedes its meaningful dissection and examination in public discourse. Moreover, the many-layered issues lumped together under the umbrella of AI – from ChatGPT to autonomous drones – consequently evade satisfactory and comprehensive address by policymakers and officials, given their Frankenstein-like kinship within AI’s mosaic of an issue landscape.

As the conversations at this year’s Milton Wolf seminar revealed, charting the muddy waters of AI’s issue space presents a challenge to both policymakers and scholars. How do we establish a meaningful intellectual and regulatory space when platform algorithms, weapons systems, chatbots, and surveillance tools all exist as unlikely bedfellows in the same bucket of concerns? AI lacks a clearly defined issue space; heterogeneous problems often become obfuscated through their amalgamation under a homogenized set of computational concerns. In turn, only diffuse attention is paid to the myriad puzzles at play, given the cryptic and frequently amorphous popular understandings of AI. The breadth of policy issues posed thus remains incompletely addressed, as regulators and scholars attempt to wrangle AI’s emergent, multifaceted challenges as though they were a single wolf in sheep’s clothing, when the dilemma in fact has many more faces than one.

Through this reflection, I aim to elucidate and disaggregate some of the major conceptual opacities surrounding AI, and to reveal the array of challenges underlying this widely used term that policymakers and scholars need to address. While the best approach for doing so remains a task for the experts, mapping the issue landscape provides a preliminary way forward in navigating a world ever more defined by ‘artificial intelligence.’ Specifically, I aim to highlight four areas of the AI issue space that call for our greater attention and clarification:

  1. What – What are we talking about when we talk about AI?
  2. Who – Who are the relevant actors and stakeholders involved when we talk about AI?
  3. How – How do we address issues of concern pertaining to AI?
  4. Why – Why do we need to be critically attuned toward AI?

The What

What are we talking about when we talk about AI? The enigma of what constitutes AI and its relevant issue space was threaded throughout this year’s Milton Wolf seminar. Drawing on insights from a variety of fields, the experts present revealed that the complexities AI yields remain pervasive and manifold. Initially, AI was problematized in perhaps its most conventional public understanding – as the problem child of platform-based algorithms. Through the lens of social media, AI’s most glaring concerns manifested in discussions around its potential to perpetuate harmful biases via algorithmic profiling, to disseminate mis- and disinformation, and to polarize electorates into political echo chambers online. Relatedly, similar AI-driven challenges emerged in conversations over assistance tools such as ChatGPT, where discussants contested the utility and built-in biases of such applications, which rely heavily on algorithmically trained learning capabilities. The core issues in this first vein of concern surrounded the potentials and limits of AI-driven information provision, and consequently, the impacts of algorithmically shaped information exposure on the broader public sphere.

The issues surrounding AI, however, grew considerably more complicated throughout the panel discussions. Beyond the platform audience, the precarity underlying AI’s information provision found its most contentious manifestation in conversations around the Israeli army’s “Lavender” program, an AI-driven targeting system exposed by the investigative journalism of +972 Magazine. Run with “little human oversight,” the Lavender program autonomously generated kill lists for the Israeli army of suspected Hamas and Palestinian Islamic Jihad affiliates, which were used in assassination offensives carried out by the army against Palestinians. Alarmingly, the suspects Lavender named were targeted in army strikes “‘as if [they] were a human decision,’” with Israeli officials purportedly giving “sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based.” Most concerningly, these AI systems led to the coordinated targeting of suspected Palestinian militants at their family homes, resulting in the deaths of countless children and innocent civilians.

The physical risks posed by autonomous AI systems – and their capacity to mark certain populations as vulnerable to state violence – were further echoed in later panel discussions surrounding the role of Pegasus spyware. Pegasus notoriously serves as a tool of government surveillance, officially said to identify threats to the state but increasingly revealed as a means to track and spy on dissidents, investigative journalists, and activists. Its illegal use in surveilling individuals has been noted as abetting human rights violations. Following the problematics raised by both Lavender and Pegasus, AI’s potential reach for harm culminated in discussions over an AI future postulated by seminar participants, one that I term the butterfly scenario. The butterfly scenario was as threatening as it was innocuous in appearance – in musing over a future shaped by AI, seminar participants hypothesized the eventual development of autonomous weapons technology as guileless-looking as a butterfly, yet with the unsupervised capability of carrying out precision strikes for militarized groups. Such a future, troublingly, appears not far outside the realm of possibility.

The conversations at the Milton Wolf seminar revealed a plethora of seemingly related, yet vastly disparate, concerns pertaining to AI as a scholarly and regulatory subject. Disaggregating the AI issue bucket piece by piece thus proves paramount, with the need to disambiguate AI as a conceptual problem standing as a precursor to any productive academic or policymaking discussion. At present, the AI issue space conflates and amasses a series of matters that, while linked, would be more efficiently addressed if parsed along a continuum or organized into a typology of problematics.

The Who

Who are the relevant actors and stakeholders involved when we talk about AI? While the stakeholders may seem obvious, AI involves a number of seen and unseen parties. Beyond the academic, policy, and journalistic circles aiming to examine it, AI engages participants across governments, militaries, private corporations and interest groups, everyday laypersons, and so on. A clear grasp of the vested actors and interests structuring AI’s patchwork landscape proves important in untangling the varied incentives driving both AI’s technological production and its further adoption. From a policymaking standpoint, this is a necessary precondition for implementing regulations aimed at bringing meaningful oversight to AI development. From a scholarly point of view, vetting the various strategic actors with a hand in AI advancement aids comprehensive research in theoretically unpacking the phenomenon. Furthermore, clearly outlining the network of AI stakeholders allows accountability and ethical considerations to be built back into systems whose autonomous functioning otherwise evades comprehensive human oversight.

The How

How do we address issues of concern pertaining to AI? Clarifying the relevant stakeholders provides a starting point for meaningful governance approaches to AI. From a regulatory perspective, tackling AI’s issues involves a number of considerations. For one, how should the onus of regulation be divided between public and private actors? To what extent do private corporations have a responsibility to ethically self-regulate their technological developments, algorithms, and AI-driven systems? How might governments and legal practitioners begin to address the myriad concerns posed not only by private platforms, but also by militarized and state entities? Additionally, how should AI regulation engage international oversight or transnational coordination? Should there be some form of independent AI supervisory body? Or, as the online and offline harms posed by AI become increasingly pernicious, should there be a move toward agreements on AI nonproliferation? These questions represent only some of the provocations raised in discussions at the Milton Wolf seminar, and offer an initial tracing of the policymaking considerations needed moving forward.

It is also important to acknowledge that the negative consequences of AI did not go unchallenged or unqualified throughout the seminar. Various interlocutors rightly raised the many benefits AI offers, such as automated content moderation, the identification of disinformation and bot accounts online, and the provision of accessible learning aids. Such insights make clear that we must also chart a means of working positively with AI, despite the issues at stake, as it will likely remain a feature of our technological landscape long into the future. These tensions between AI’s positive and negative potential thus call for scholarly and policy approaches that weigh its evident harms alongside the substantial benefits it can offer to human endeavors.

The Why

Why do we need to be critically attuned toward AI? As a final point, I reflect on why the exercise of disaggregating AI and its relevant issue space proves worthwhile. Ultimately, there is a strong need to scrutinize the discursive and conceptual construction of AI, given the embedded notions of accuracy and truth on which many of its associated systems are predicated. As AI increasingly underpins the algorithms structuring most digitally connected individuals’ exposure to news and information online, as well as their means of widespread human interaction, it wields an enormous power to mislead, to shape opinion, and to harm and divide unnoticed. The autonomous systems underlying AI further obfuscate their influential roles through their implementation in user interfaces that operate under a self-evident guise of mundanity, routine, and everyday convenience.

Finally, the complexities characterizing AI warrant our attention and awareness because the problems raised remain agnostic to regime type and society. The harms of AI persist across autocracies and democracies alike, and are perhaps even more acute in the technological strongholds of the West. It is thus vital to remain critically attuned to the Western neoliberalism, techno-utopianism, and capitalist interest that couch much of the public conversation surrounding AI’s potential. Western democracies remain far from immune to the issues at play. The risks associated with AI technology – from the algorithmic perpetuation of disinformation, to autonomous weapons, to surveillance systems – do not exist purely in the context of far-off foreign adversaries. These challenges lie right in our own backyard, and they are only intensifying.
