Milton Wolf Seminar on Media and Diplomacy

AI: The New Paradigm of War

By Arsalan A. Satti

Introduction

As the world was recovering from the effects of COVID-19 in late 2020, a conflict was unfolding in a relatively forgotten region of the South Caucasus between Armenia and Azerbaijan. The Second Nagorno-Karabakh War (27 September 2020 – 10 November 2020) was a watershed moment that redrew old borders in a brief contemporary conflict, but more importantly it gave the world a snapshot of how AI is transforming the battlefield, and of how militaries across the globe, from superpowers to fringe states and even non-state actors, will now rely on AI for both victory and survival. When we think about AI’s ascendance on the battlefield, it is manifested primarily in drones, lethal autonomous weapons, data-processing algorithms, and AI-powered weapons systems. In Nagorno-Karabakh it was loitering munitions, or “kamikaze drones”, that were instrumental in helping Azerbaijan achieve victory (Barton, 2021). One such drone was the Israeli-made “Harop”, which debuted in this war on the Azerbaijani side and tilted the conflict in Azerbaijan’s favour. These drones loitered over the battlefield, provided a live feed of the target, and struck military targets in a fire-and-forget fashion (Israel Aerospace Industries, n.d.). The live feeds, instantly uploaded by Azerbaijani officials to social media platforms, wreaked both physical and moral havoc on the Armenians. According to an Israeli defence analyst at the IDF Tactical Command College (Hecht, 2022), 45% of all targets destroyed by the Azerbaijani military in this war were destroyed by drones. The enduring lesson of this war is clear: AI's pivotal role in shaping future conflicts is epoch-defining.

AI: Revolutionising Military Decision-Making

AI is not limited to turbocharging drones; its most significant application lies in the military decision-making process. The vast swath of data coming from battlefield sensors (intelligence, surveillance, and reconnaissance) has made it increasingly difficult for analysts and military commanders to reach a decision without a considerable lag and an inherent chance of error. AI algorithms, however, tackle this with remarkable finesse in a fraction of the time. A case in point is the Ukrainian front, where the US-based AI company Palantir Technologies showcased its AI-powered “MetaConstellation” software. It incorporates data from military sensors, commercial satellites, and open-source intelligence into the company’s “Edge AI” models to quickly process imagery, detect and geolocate objects, and assess changes over time, enabling rapid decision-making and response by shortening the sensor-to-shooter cycle. This integration also enables continuous refinement and improvement of the models, ensuring more accurate and timely insights (Palantir, n.d.). Similarly, soldiers and commanders on the ground can use handheld tablets to request additional coverage, supported by collaboration between Western militaries, intelligence services, and Ukrainian ground forces. Realising the vast importance of AI in this David-versus-Goliath battle with Russia, Ukraine has leveraged cutting-edge AI technology to significant advantage. Each Ukrainian battalion today includes a software developer, and the country has developed its own military tech systems. These include “eVorog,” a secure chat system for civilian reports, an "Army of Drones" for reconnaissance, and the Delta battlefield mapping system, which provides real-time data for military planning (Ignatius, 2022). Starlink, with its network of approximately 2,500 satellites, provides crucial broadband connectivity, enabling Ukrainian soldiers to quickly upload and download intelligence and targeting data.
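To make the idea of multi-source fusion concrete, the toy Python sketch below shows one way such a pipeline could corroborate detections from independent sensors before flagging a target. It is purely illustrative: every name in it (SensorReading, fuse, the grid-cell heuristic, the confidence threshold) is invented for this example and does not describe MetaConstellation, Delta, or any real system.

```python
"""Illustrative sketch only: a toy multi-source 'sensor fusion' step.

All names and thresholds here are hypothetical; real battlefield
fusion systems are vastly more complex.
"""
from dataclasses import dataclass


@dataclass
class SensorReading:
    source: str        # e.g. "satellite", "drone", "osint"
    lat: float
    lon: float
    confidence: float  # detector confidence in [0, 1]


def fuse(readings, threshold=1.5):
    """Group readings into rough location cells and corroborate them.

    A location is flagged only when at least two independent source
    types report it and their combined confidence clears a threshold --
    the cross-checking step that shortens the human analysis lag the
    essay describes.
    """
    cells = {}
    for r in readings:
        # Round coordinates to ~1 km grid cells so nearby detections merge.
        key = (round(r.lat, 2), round(r.lon, 2))
        cells.setdefault(key, []).append(r)

    confirmed = []
    for key, cell_readings in cells.items():
        sources = {r.source for r in cell_readings}
        score = sum(r.confidence for r in cell_readings)
        if len(sources) >= 2 and score >= threshold:
            confirmed.append(
                {"cell": key, "sources": sorted(sources), "score": round(score, 2)}
            )
    return confirmed


readings = [
    SensorReading("satellite", 48.501, 37.998, 0.8),
    SensorReading("drone",     48.503, 38.001, 0.9),   # corroborates the satellite
    SensorReading("osint",     50.450, 30.524, 0.4),   # single source: not flagged
]
print(fuse(readings))
```

The design point the sketch makes is the one in the paragraph above: the value of the software is less in any single sensor than in automatically cross-referencing many feeds, so that only corroborated detections reach a human decision-maker.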

The advantage of AI on the battlefield lies not only in its tactical capabilities but also in its affordability, simplicity, and availability compared with expensive and sophisticated weapon systems. States and organisations that once could not afford a fully equipped air force can now acquire a comparable capability through AI-powered drones, advanced AI models, and autonomous weapons that, while not as comprehensive or powerful, still offer significant operational and strategic dividends.

Ethical Dilemmas

All of this, however, has come at a cost. Whilst AI algorithms and data-enabled AI models significantly reduce the sensor-to-shooter cycle, they have come under severe criticism in the recent Israel-Palestine conflict, when mainstream newspapers ran headlines that it is now AI that decides who lives and who dies (McKernan & Davies, 2024). As per reports, Israel used an AI-powered system called Lavender, especially during the initial stages of the war, which essentially generated kill lists without the machine's choices being thoroughly verified or the underlying intelligence data examined. According to these reports, military personnel often spent only about 20 seconds per target, primarily to confirm the target was male, before authorising bombings. Despite knowing the system had an error rate of about 10%, including mistakenly marking individuals with loose or no connections to militant groups, the army used it extensively (Abraham, 2024; McKernan & Davies, 2024). The IDF thus targeted individuals in their homes, typically at night, when entire families were present, because it was easier to locate them there.

The IDF quickly rejected these claims, saying that it does not use AI to identify or predict terrorists but uses databases to cross-reference intelligence. According to the IDF, each target is individually assessed for military advantage and potential collateral damage, ensuring strikes comply with international law (The Guardian, 2024). Notwithstanding the Israeli response, this instance and countless others have sparked serious ethical questions about the use of AI in future conflicts, especially when such systems might be in the possession of authoritarian states, terrorist organisations, and non-state actors that do not subscribe to international ethical norms.

Regardless of the ethical debate and the existing codes of conduct of China and the USA, which limit AI's military use to the confines of international humanitarian law and the law of armed conflict (Stewart & Hinds, 2023), it seems unlikely that the full-scale military application of AI will remain limited in the future, especially once states face the threat of unabated use of autonomous AI in asymmetrical warfare by disruptors. A delayed decision-making loop can cost even the mightiest of armies a war.

The Future Battle for AI Supremacy

The global pursuit of AI technology and its military applications resembles a gold rush, drawing the interest of not only major world powers but also smaller states, corporations, and even tech startups. This scramble reflects the desire to acquire coveted AI capabilities that serve respective interests. However, amidst this fervour, strategic competition between China and the US remains paramount. Additionally, potential disruptors such as Russia, North Korea, and Iran add complexity to the international landscape.

China is extensively researching and developing military AI, focusing on seven key areas: intelligent vehicles, intelligence and surveillance, predictive maintenance, electronic warfare, simulation, command and control, and automated target recognition (Fedasiuk et al., 2021). Despite limited public information on the People's Liberation Army's deployment of AI systems, these areas indicate significant investment. AI's application in uncrewed systems varies from remote-controlled to fully autonomous, with China advancing in drone swarming and developing systems like the FH-97A, akin to the US "loyal wingman" concept, in which an autonomous aircraft flies in a team alongside a crewed aircraft (Stewart, 2023). The use of AI in cyber operations is already relatively advanced and is likely to become even more sophisticated over time. Enhanced offensive cyber capabilities will support a comprehensive range of cyber operations, including "adversarial AI," which involves efforts to disrupt the opponent's AI systems (Vassilev et al., 2024). The PLA is also integrating AI into battlefield functions such as predictive maintenance and ISR, leveraging domestic expertise in biometrics and image recognition. AI in command, control, and communications (C3) is maturing, with potential applications in cyber operations and decision-making processes.

China’s military AI development could shift the military balance in the future and improve its conventional combat capabilities, whilst posing strategic risks, including escalation driven by autonomous systems and challenges to US nuclear deterrence. Moreover, there is little transparency regarding China's AI modernisation efforts, which could lead to strategic surprises for the US in the future (Stokes, 2024).

In contrast to China, Russia's military application of AI centres on information warfare, in which AI figures as a pivotal tool. Rather than focusing solely on military technology, the Russians emphasise AI's potential for shaping information at the strategic level. This includes leveraging disinformation to influence politics and societies (Thornton & Miron, 2020). However, this vision extends beyond mere influence; AI-enabled information warfare aims to create large-scale instability within adversaries by manipulating information and cyber effects. In Russian military doctrine, cyber warfare falls under information warfare, and AI's integration into cyber operations poses significant strategic threats to the US and its allies: the potential AI threat is not just about drones and lethal autonomous weapons but also about cyber weapons.

Iran and North Korea have both demonstrated keen interest in harnessing artificial intelligence (AI) for military purposes, albeit through efforts riddled with deficiencies and roadblocks. Iran has actively invested in AI research and development within its military sphere, primarily aiming to bolster its defence capabilities in the face of potential cyber-attacks (cite). Meanwhile, North Korea's AI landscape remains elusive due to its secretive nature and international sanctions. However, available information suggests active development efforts across government, academia, and industry. North Korean researchers have applied AI and machine learning (ML) in sensitive areas such as wargaming and surveillance, aligning with the country's focus on "informatization" as a core economic endeavour (Kim, 2024). Despite hardware procurement challenges due to sanctions, North Korea remains committed to AI development.

Conclusion

In conclusion, as conflicts evolve in the 21st century, the integration of AI into military operations has become increasingly prevalent, reshaping the character of warfare. The Second Nagorno-Karabakh War served as a stark reminder of AI's transformative role on the battlefield even in fringe wars, with drones and AI-powered decision-making systems playing pivotal roles in shaping the outcome of conflicts. While AI offers unparalleled advantages in terms of speed, precision, and affordability, its deployment raises significant ethical concerns, particularly regarding the potential for autonomous systems to undermine humanitarian principles and international norms. The dichotomy between technological advancement and ethical considerations underscores the urgent need for robust regulations and accountability mechanisms to govern the use of AI in warfare.

As the historian Yuval Noah Harari aptly observed in a recent conversation (Harari & Suleyman, 2023), the rise of AI heralds the end of human-dominated history, ushering in an era in which control may shift to non-human entities. This paradigm shift underscores the importance of measures to ensure that AI remains a tool for human progress rather than a force to which we pass the baton. Despite existing codes of conduct and regulations, the allure of AI supremacy in military affairs will continue to drive intense competition among global powers and other nations vying for dominance in this critical domain.

References

Abraham, Y. (2024). ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza. 972mag. [online] Available at: https://www.972mag.com/lavender-ai-israeli-army-gaza/ [Accessed 13 May 2024].

Barton, R. (2021). Loitering Menace. ADBR. [online] Available at: https://adbr.com.au/loitering-menace/ [Accessed 13 May 2024].

Davies, H. and McKernan, B. (2024). IDF colonel discusses ‘data science magic powder’ for locating terrorists. The Guardian. [online] Available at: https://www.theguardian.com/world/2024/apr/11/idf-colonel-discusses-data-science-magic-powder-for-locating-terrorists?ref=upstract.com [Accessed 13 May 2024].

Fedasiuk, R., Melot, J., & Murphy, B. (2021). Harnessed Lightning: How the Chinese Military Is Adopting Artificial Intelligence. Center for Security and Emerging Technology. Available at: https://cset.georgetown.edu/wp-content/uploads/CSET-Harnessed-Lightning.pdf [Accessed 10 May 2024].

Harari, Y.N., & Suleyman, M. (2023). "Yuval Noah Harari and Mustafa Suleyman on the future of AI." The Economist. Available at: https://www.economist.com/films/2023/09/14/yuval-noah-harari-and-mustafa-suleyman-on-the-future-of-ai [Accessed 13 May 2024].

Hecht, E. (2022). Drones in the Nagorno-Karabakh War: Analyzing the Data. Military Strategy Magazine, 7(4), 31–37.

Ignatius, D. (2022). How the algorithm tipped the balance in Ukraine. The Washington Post. [online] Available at: https://www.washingtonpost.com/opinions/2022/12/19/palantir-algorithm-data-ukraine-war/ [Accessed 13 May 2024]

Israel Aerospace Industries. (n.d.). HAROP Loitering Munition System. Available at: https://www.iai.co.il/p/harop [Accessed 13 May 2024].

Kim, H. (2024). "North Korea’s Artificial Intelligence Research: Trends and Potential Civilian and Military Applications." 38 North. Available at: https://www.38north.org/2024/01/north-koreas-artificial-intelligence-research-trends-and-potential-civilian-and-military-applications/ [Accessed 13 May 2024].

McKernan, B. and Davies, H. (2024). ‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets. The Guardian. [online] Available at: https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes [Accessed 13 May 2024].

Palantir. (n.d.). MetaConstellation. [online] Available at: https://www.palantir.com/offerings/metaconstellation/ [Accessed 13 May 2024].

Stewart, R. and Hinds, G. (2023). Algorithms of war: The use of artificial intelligence in decision making in armed conflict. ICRC Law & Policy Blog. [online] Available at: https://blogs.icrc.org/law-and-policy/2023/10/24/algorithms-of-war-use-of-artificial-intelligence-decision-making-armed-conflict/ [Accessed 13 May 2024].

Stewart, E. (2023). Survey of PRC Drone Swarm Inventions. China Aerospace Studies Institute. Available at: https://www.airuniversity.af.edu/Portals/10/CASI/documents/Research/Other-Topics/2023-10-09%20Survey%20of%20PRC%20Drone%20Swarm%20Inventions.pdf [Accessed 10 May 2024].

Stokes, J. (2024). "Military Artificial Intelligence, the People’s Liberation Army, and U.S.-China Strategic Competition." [Testimony]. Available at: https://s3.us-east-1.amazonaws.com/files.cnas.org/documents/Jacob_Stokes_Testimony.pdf [Accessed 13 May 2024].

The Guardian. (2024). Israel Defence Forces’ response to claims about use of ‘Lavender’ AI database in Gaza. [online] Available at: https://www.theguardian.com/world/2024/apr/03/israel-defence-forces-response-to-claims-about-use-of-lavender-ai-database-in-gaza [Accessed 13 May 2024].

Thornton, R., & Miron, M. (2020). "Towards the 'Third Revolution in Military Affairs': The Russian Military’s Use of AI-Enabled Cyber Warfare." The RUSI Journal, 165(3), 12–21. Available at: https://doi.org/10.1080/03071847.2020.1765514 [Accessed 13 May 2024].

Vassilev, A. et al. (2024). Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. National Institute of Standards and Technology. Available at: https://csrc.nist.gov/pubs/ai/100/2/e2023/final [Accessed 13 May 2024].
