Adoption of AI Military Technology Accelerating

"Artificial intelligence recommends which targets to attack, and in some ways it operates far faster than human thought. This enables both scale and speed. It is possible to carry out assassination-style strikes while simultaneously neutralizing an adversary regime's ability to respond with its airborne ballistic missiles. In past wars, such operations would have taken days or even weeks; now everything can be accomplished at once." (Craig Jones, professor at Newcastle University and expert on the "kill chain")


AP Yonhap News


During the first 24 hours of airstrikes in the current U.S.-Iran war, the U.S. military struck around 1,000 targets, killing approximately 200 people and injuring 750. With the revelation that artificial intelligence (AI) guided airstrikes carried out in a single day, AI has emerged as a 'strategist' in warfare. The weaponization of AI is turning war into a contest of speed, as AI compresses the 'kill chain'—the process running from target identification through legal authorization to attack.


According to the Washington Post and other outlets, the U.S. military reportedly used an AI-based military intelligence platform, the Maven Smart System, in its initial airstrikes against Iran. Built on Palantir's technology, the Maven Smart System uses AI to analyze imagery from drones and satellites and automatically identify targets. By visualizing battlefield targets and the locations of supplies, it supports faster and more efficient decision-making. Since its origins in the U.S. Department of Defense's Project Maven in 2017, the effort has aimed to maximize operational efficiency through data analysis.


The revelation that Anthropic's AI model Claude is embedded in the Maven system added to the shock. Whereas earlier systems merely classified targets, AI now proposes operational strategies to human commanders.

Examining the AI Used in War: Focused on Identifying Enemy Forces and Supplies

The use of AI has led recent conflicts to be described as high-tech wars. So far, AI has been used mainly to locate enemy forces and supplies. The AI tactical program 'GIS Arta' in the Russia-Ukraine war is a representative example. GIS Arta uses AI to analyze information collected from sensors such as drones and satellites and to suggest optimal attack routes and target locations. Ukraine used the technology to repel more than 1,500 Russian troops and around 70 tanks attempting to cross the Siverskyi Donets River. Ukraine has also located Russian troops and military equipment with the U.S. Maven system.


AI systems were also deployed against enemy forces in the Gaza war. Israel used the AI target-analysis system 'Lavender' to screen low-ranking militants and rank target priorities. Notably, it flagged up to 37,000 Palestinian men as potentially linked to Hamas, a figure that caused shock. A second AI-based decision-support system, 'Gospel,' generated targets such as buildings, weapons, and command facilities.


The adoption of AI military technology is accelerating. According to 'Responsible Procurement of Military Artificial Intelligence,' a report published last month by the Stockholm International Peace Research Institute, AI is regarded as a technology that can amplify military power amid confrontations such as the U.S.-China strategic competition. Having watched AI's influence on actual battlefields, countries are rapidly integrating it into their warfighting.

Humans Disappearing from the Battlefield… Responsibility Must Be Maintained

However, there are concerns that AI is ushering in an era of rapid bombardment while diminishing human responsibility in warfare. As decision-making time shrinks, humans risk being reduced to rubber-stamping automated attack plans. David Leslie, a professor at Queen Mary University of London, explained, "Humans who must make attack decisions may feel less responsible for the outcomes because AI is taking over the process."


The issues of responsibility and governance in the military use of AI remain challenges for the future.



Within the international community, the REAIM (Responsible Artificial Intelligence in the Military Domain) Summit has been held, producing declarations meant to strengthen human accountability for AI-based weapons systems. The declarations call for assessing the risks of military AI in advance and for enhanced training of operational personnel. Establishing practical governance remains difficult, however: the declarations are not legally binding, and countries such as the United States and China did not participate in this year's summit. At the summit held last month, only 35 of 85 participating countries signed the declaration setting out principles for controlling AI in warfare.


This content was produced with the assistance of AI translation services.

© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
