Milton Wolf Seminar on Media and Diplomacy

The problems AI constructs: navigating claims of ‘AI use’ in conflict and media

By Glen Berman, Master of Applied Cybernetics, Ph.D. candidate at the Australian National University

The 2024 Milton Wolf seminar theme—bots, bombs, and bilateralism—draws attention to the relationship between emerging technologies and conflict. Conflict, in seminar discussions, was primarily understood through the prism of state actors and the edifice of international relations. Emerging technologies, meanwhile, were primarily understood through notions of Artificial Intelligence (AI). Thus, recurring concerns in seminar discussions included: the use of AI in current conflicts, particularly Russia’s invasion of Ukraine and Israel’s assault on Gaza; the impact of state-supported, AI-empowered mis- and dis-information campaigns on public understandings of these conflicts; the use of AI to enhance state surveillance powers; and, the potential of cheap AI systems to exacerbate asymmetrical conflicts between states and non-state actors, such as the conflict between Houthi rebels and Western shipping interests. As this list of concerns indicates, however, ‘AI’ was an amorphous concept throughout discussions: in what ways is the AI used by the Israeli military similar to the AI used in mis- and dis-information campaigns, or to the AI that various police forces are adopting to track activist activities?

In this blog post I want to reflect on some of the shared political commitments that are revealed by how ‘AI’ appears in these recurring concerns about emerging technologies and conflict. These political commitments are not necessarily explicit, or even coherent, but they speak to a particular understanding of the world and of conflict, and they help enable the domains of state violence and war, mis- and dis-information, surveillance, and weapons development to be brought into relation with each other through discussion about the use of ‘AI’. My reflections are informed, in part, by conversations with discussants at Milton Wolf, and, in part, by recent Science and Technology Studies (STS) research that calls for a situated understanding of the performance of AI (Jaton and Sormani 2023)—that is, of the work that is involved in discursively and materially intertwining the “figure of AI” with particular sociotechnical systems. In this sense, the “figure of AI” helps us gain an understanding of how particular actors who mobilize the term understand the nature of the conflicts they are active in (Suchman 2023b)—but this figure also risks leading us to make unfounded assumptions about the underlying technological advances that are implicated in these conflicts. Recognizing this is critical for news media reporting on ‘AI’, and for diplomats and public policy makers interested in regulating the use of ‘AI’.

Searching for an AI referent

On 8 February 2024, TIME declared the “first AI war”, publishing a detailed account of how US-based technology firm Palantir is providing Ukrainian agencies with its software, “which uses AI to analyze satellite imagery, open-source data, drone footage, and reports from the ground to present commanders with military options” (Bergengruen 2024). Two months later, in early April, +972 Magazine and Local Call published an investigation which found that “the Israeli army has developed an artificial intelligence-based program known as ‘Lavender’… to generate targets for assassination” (Abraham 2024). In both publications, these statements are supported by quotes from government, commercial, or military representatives—it is these representatives who use the figure of AI to describe their technological initiatives. But what, precisely, is this technology?

The TIME article is relatively sparse on technical detail. The article highlights that ‘AI’ is responsible for identifying targets for military strikes. “AI-enabled” models are fed streams of intelligence data from many sources, and use this data to predict enemy positions and identify potential targets. The models, the article notes, “learn and improve with each strike”. What makes these models AI-enabled? Three potential characteristics stand out: the ability to ingest large volumes of heterogeneous data; the ability to produce actionable predictions; and the ability to be dynamically updated based on new data. Note, however, that these characteristics are not tied to the model itself: ‘AI-enabled’ models may be state-of-the-art neural networks, but they may also be simple linear regression models, rules-based deterministic models, or any other form of model capable of condensing a large number of inputs into a single, actionable output. And, as the +972 exposé of Israel’s Lavender program demonstrates, there is very little about these characteristics that is ‘artificial’ or ‘intelligent’, as Kate Crawford and others have argued (Crawford 2021; Jaton and Sormani 2023).
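
To make this concrete, consider a minimal, purely illustrative sketch (in Python, using invented placeholder data) of a model that exhibits all three characteristics. Nothing here reflects any actual military system; the point is simply that an ordinary logistic regression, fit by gradient descent, already ingests many inputs, condenses them into a single actionable score, and can be updated as new data arrives.

```python
import numpy as np

# (1) Ingest a large volume of heterogeneous inputs, flattened into feature vectors.
# The data below is random placeholder data standing in for fused intelligence feeds.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 50))   # 1,000 observations, 50 features
labels = rng.integers(0, 2, size=1000)   # placeholder binary labels

def update(weights, X, y, lr=0.1, steps=200):
    """Plain logistic-regression gradient steps; reusable for incremental updates."""
    for _ in range(steps):
        probs = 1.0 / (1.0 + np.exp(-X @ weights))
        weights = weights - lr * (X.T @ (probs - y)) / len(y)
    return weights

weights = update(np.zeros(features.shape[1]), features, labels)

# (2) Condense a new observation into a single, actionable score.
new_obs = rng.normal(size=50)
score = 1.0 / (1.0 + np.exp(-new_obs @ weights))
print(f"score: {score:.2f}")

# (3) 'Learn and improve' with each new batch of data: the same update, re-applied.
weights = update(weights, rng.normal(size=(10, 50)), rng.integers(0, 2, size=10))
```

No neural network, and nothing obviously ‘intelligent’, is involved; the same three characteristics could equally be satisfied by a hand-written rule set.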

The +972 exposé provides somewhat greater technical detail on Israel’s use of ‘AI’ to identify and target suspected militant operatives in Gaza. The Lavender system leverages the mass surveillance system Israel has developed in Gaza. The core of the system is a model, which uses characteristics of known Hamas operatives to predict the likelihood of a person being a Hamas operative. More specifically, the model provides real-time scores between 1 and 100 for almost all Palestinians in Gaza, where 100 represents complete confidence that the person is a militant. Since 7 October 2023, the Lavender system has been used to generate automated target lists for aerial bombing. Yet the system is known to be only 90% accurate, and much of its input data is unreliable. For example, the system associates each surveilled mobile phone number with one person, even though a phone might be shared among a family, or passed from one person to another. Additionally, this ‘automation’ is fundamentally human: it is dependent on Israeli military officials determining a threshold for when to label a predicted militant as a military target (i.e., what Lavender score is high enough?) and, similarly, on Israeli officials determining that it is justifiable to kill targets when they are in their family homes, at night (i.e., when they are most likely to be with their family). Indeed, the +972 article includes quotes from military insiders who state that, when faced with pressure to produce more bombing targets, they lowered the Lavender thresholds. What work does the figure of AI achieve when applied to systems such as Lavender and to the conflicts in Ukraine and Gaza?
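
The dependence on a human-chosen threshold can be illustrated with a trivially simple, hypothetical sketch (all numbers invented): the ‘automated’ step reduces to a single comparison, while the consequential decision of where to set the cut-off, and what happens when it is lowered under pressure, sits entirely with people.

```python
import numpy as np

# Hypothetical 1-100 scores for a surveilled population (invented, random numbers).
rng = np.random.default_rng(1)
scores = rng.integers(1, 101, size=10_000)

# The threshold is a human policy choice, not a model output.
for threshold in (95, 85, 70):
    flagged = int((scores >= threshold).sum())
    print(f"threshold {threshold}: {flagged} people flagged")

# Lowering the threshold mechanically expands the flagged population,
# without any change to the underlying model or its accuracy.
```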

The work of legitimization

‘AI’ is discursively mobilized by the Ukrainian and Israeli governments to bolster their political aims. In the case of Ukraine, the TIME article reports that Ukrainian officials were inundated with offers of computational services from US-based technology firms during the opening months of the war, leading officials to determine to “build a tech sector that could not just help win the war but also serve as a pillar of Ukraine’s economy” (Bergengruen 2024). Indeed, these officials cite the close relationship between startups and the military in Israel as a template. Here, it is the economic potential represented by ‘AI’ that is of importance: the Ukrainian government’s claims of using AI in the battlefield help to bolster their efforts to demonstrate that they are planning for Ukraine’s recovery.

Israel, historically, has positioned itself as a hub for high-tech research and commercialization, particularly in cybersecurity and surveillance. This positioning has served dual political aims: domestically, it supported Israeli military claims of control over potential security threats in Gaza and the West Bank; internationally, particularly through the outsourcing of technology-driven surveillance work to startup firms, it supported Israeli government efforts to avoid responsibility for human rights violations (Zureik 2020). In the context of the post-October assault on Gaza, Israel’s claims of using ‘AI’ represent an attempt to normalize its war conduct: they seek to situate the conflict within the context of other high-tech wars that have generally received Western support, such as Ukraine’s ongoing defense against Russia. However, as the reaction to the +972 investigation indicates (e.g., Tharoor 2024), these normalization attempts have largely failed.

The promise of closure and containment

Lucy Suchman considers the figure of AI within the context of the United States Department of Defense (DoD) and its automation efforts (Suchman 2023a). Suchman’s analysis of the development of Project Maven (a flagship DoD ‘AI’ project) highlights the DoD’s objective of controlling all contingencies (i.e., closure) in a conflict zone. Within a colonial frame, which perceives threats as originating from outside the U.S., the tactical objective of closure supports a broader geopolitical goal: containment of threats. Suchman’s analysis demonstrates how the DoD understands the promise of ‘AI’ to be a critical component of its closure and containment strategy. Here, the promise is the objective and automated translation of signals (i.e., events on the ground) to data (e.g., drone video footage) to information (i.e., actionable target lists). The promise is false, however, because distilling signals from noise (i.e., significant events from insignificant events) depends on situated knowledge, which automated statistical processing erases. The impact of this erasure is that the signal-noise distinction becomes grossly simplified, leading, for example, to vast expansions in the number of people identified by automated systems as ‘combatants’ rather than civilians.

Within military contexts, then, the figure of AI helps bolster existing commitments to strategies of closure and containment. The figure of AI helps foster an understanding of war and conflict as winnable through information acquisition and processing, offering interventionist governments a path toward geopolitical influence that does not depend on the politically unviable strategy of boots-on-the-ground. And, in a time when governments are struggling to justify increases in defense spending, the figure of AI also works to reframe military expenditure as investment in national research and technology competitiveness. Given this, as Suchman argues, it is notable that the DoD’s Project Maven has been designed with the assumption that the commercial sector currently leads the government in AI development.

Uniting commercial and military logics

The three characteristics shared by Israel’s Lavender, Ukraine’s AI-enabled models, and Project Maven—ingesting large amounts of data; producing actionable predictions; ‘learning’ through new data—are, arguably, the defining characteristics of the current generation of computing techniques that “travel under the sign of AI” (Suchman 2023b, 4). As discussed during the 2024 Milton Wolf seminar, these computing techniques rely on and extend the cloud computing infrastructure maintained by large technology firms (Cobbe, Veale, and Singh 2023). As such, an additional characteristic of ‘AI’ use in conflict is the central role of technology firms and of the commercial logics they share.

The commercial logics underpinning AI solutions include: the delivery of services through cloud-based platforms; the translation of public and third-party data into privately held predictive models and training datasets; a deployment strategy focused on outrunning regulators; the abrogation of responsibility for experimental technologies to startups and vulnerable communities; and widespread use of crowdsourced labor to support automation efforts. Taken together, these logics reflect a particular set of political commitments. So-called ‘planetary scale’ platforms, in which regional or interpersonal distinctions are flattened and all people are reduced to ‘users’, reflect a commitment to acontextuality and universality (we should be able to offer the same service, everywhere). Predictive models trained on historical behavior data reflect a commitment to accepting correlation as a replacement for causation. The use of crowdsourced labor, located largely in developing countries, and the deployment of experimental technologies in vulnerable communities reflect commitments to maintaining inequitable colonial systems of privilege and wealth (Tacheva and Ramasubramanian 2023).

These commercial logics and their corresponding political commitments align neatly with the commitments of military actors. The United States military, for example, has used conflicts in the Middle East and Africa, and its military partners, to experiment with and refine new technologies—a colonial practice (Suchman 2023a) echoed by US technology firms, which use low-regulation environments and startup partners to experiment. As such, the figure of AI helps to legitimize the role of corporate actors in conflict, and works to collapse the technical, moral, and legal distinctions between the computational act of generating predictions of, say, music taste and the computational act of generating a kill-list. What, then, ought those in media and diplomacy to do when confronted by claims that ‘AI’ is being put to use?

Resisting the figure of AI

Discussions during the Milton Wolf seminar surfaced three critical strategies for resisting, or at least challenging, the figure of AI as it is often presented by state and commercial actors in conflict.

First is the need to resist the conceptual terms we inherit, both from large technology firms and from state actors. Mila Bajić’s research on ‘information chaos’ provides a clear example of this in the mis- and dis-information domain. Information chaos, argues Bajić, provides a much clearer conceptual frame for understanding the coordinated digital propaganda and legal intimidation strategies adopted by far-right populists than the narrower framing of mis-information (Bajić and Baker 2023).

Second is the need to understand the sociotechnical assemblages that claims of ‘AI use’ refer to. Crucial here is the need to re-insert the social: to understand the role that human decision making plays in even the most automated systems. Jennifer Cobbe’s study of the AI supply chain, for example, tracks the complex web of digital platforms and computing infrastructures that must be negotiated and coordinated to provide an ‘AI’ service, and in doing so highlights how technology firms structure this supply chain so as to avoid accountability (Cobbe, Veale, and Singh 2023).

Finally, there is the need to form interdisciplinary and intersectoral alliances, particularly across the computing and social sciences and civil society, to help develop shared understandings of the algorithmic models that ingest data flows and output actionable predictions—this understanding is critical for overcoming what Jenna Burrell describes as the opacity of machine learning algorithms (Burrell 2016). Here, the Milton Wolf seminars themselves may provide a promising path forward.

References

Abraham, Yuval. 2024. “‘Lavender’: The AI Machine Directing Israel’s Bombing Spree in Gaza.” +972 Magazine, April 3, 2024. https://www.972mag.com/lavender-ai-israeli-army-gaza/.

Bajić, Mila, and Grant Baker. 2023. “‘Information Chaos’ Plagues Serbia Elections.” Center for European Policy Analysis. https://cepa.org/article/information-chaos-plagues-serbia-elections/.

Bergengruen, Vera. 2024. “How Tech Giants Turned Ukraine Into an AI War Lab.” TIME, February 8, 2024. https://time.com/6691662/ai-ukraine-war-palantir/.

Burrell, Jenna. 2016. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society 3 (1): 1–12. https://doi.org/10.1177/2053951715622512.

Cobbe, Jennifer, Michael Veale, and Jatinder Singh. 2023. “Understanding Accountability in Algorithmic Supply Chains.” In 2023 ACM Conference on Fairness, Accountability, and Transparency, 1186–97. Chicago IL USA: ACM. https://doi.org/10.1145/3593013.3594073.

Crawford, Kate. 2021. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press.

Jaton, Florian, and Philippe Sormani. 2023. “Enabling ‘AI’? The Situated Production of Commensurabilities.” Social Studies of Science 53 (5): 625–34. https://doi.org/10.1177/03063127231194591.

Suchman, Lucy. 2023a. “Imaginaries of Omniscience: Automating Intelligence in the US Department of Defense.” Social Studies of Science 53 (5): 761–86. https://doi.org/10.1177/03063127221104938.

———. 2023b. “The Uncontroversial ‘Thingness’ of AI.” Big Data & Society 10 (2): 20539517231206794. https://doi.org/10.1177/20539517231206794.

Tacheva, Jasmina, and Srividya Ramasubramanian. 2023. “AI Empire: Unraveling the Interlocking Systems of Oppression in Generative AI’s Global Order.” Big Data & Society 10 (2): 20539517231219241. https://doi.org/10.1177/20539517231219241.

Tharoor, Ishaan. 2024. “Israel Offers a Glimpse into the Terrifying World of Military AI.” Washington Post, April 5, 2024. https://www.washingtonpost.com/world/2024/04/05/israel-idf-lavender-ai-militarytarget/.

Zureik, Elia. 2020. “Settler Colonialism, Neoliberalism and Cyber Surveillance: The Case of Israel.” Middle East Critique 29 (2): 219–35. https://doi.org/10.1080/19436149.2020.1732043.
