
Code Red: How AI-Accelerated Warfare Is Erasing Proportionality and Violating Human Rights

Introduction: The Speed of Death in the Age of Artificial Intelligence


Artificial intelligence (AI) is accelerating the pace of killing beyond the speed of human judgment. Once activated, autonomous weapons systems (AWS) can select and strike targets without human involvement. This is more than a technological improvement; it is a paradigm shift in armed conflict. Life-and-death decisions are outsourced to algorithms and sensor-based logic, removing human judgment and moral responsibility. The International Committee of the Red Cross (ICRC) warns that such systems “raise fundamental ethical concerns for humanity.”


Addressing the 75th session of the United Nations General Assembly, Pope Francis remarked that these technologies “irreversibly alter the nature of warfare, detaching it further from human agency.” The traditional chain of accountability, running from soldier to commander to policy-maker, collapses when weapons operate at machine speed, governed by data rather than discretion.


This breakdown strikes at the core of international human rights law and international humanitarian law (IHL). The key legal principles of distinction and proportionality, and the right to life itself, depend on contextual human judgment, a capacity that artificial intelligence systems have not been shown to possess.


Proportionality in IHL and Its Human Rights Counterparts


The principle of proportionality, a cornerstone of IHL, demands that incidental harm to civilians must not be excessive in relation to the anticipated military advantage. Human Rights Watch (HRW) has argued that “robots cannot be programmed to replicate the psychological processes in human judgment” essential for assessing proportionality. The ICRC concurs, noting the difficulty of “anticipating and limiting the effects” of autonomous weapons and warning of serious legal and ethical concerns.


This is not a theoretical concern. In Gaza, the Israeli military has deployed tools like Lavender, which uses machine learning to assign suspicion scores to individuals. The algorithm reportedly compiles surveillance data to determine potential affiliations with armed groups and marks individuals as legitimate targets based on that classification. According to HRW, this “positive unlabelled learning” relies on unverified assumptions and biased data, making it deeply incompatible with the IHL presumption of civilian status.
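

To make the critique concrete, the sketch below is a purely hypothetical illustration, not the actual Lavender system, of how a naive positive-unlabelled setup can operate: a handful of unverified “positive” labels, everyone else treated as negative during training, and a rank-based cut-off that flags the highest-scoring people regardless of whether any of them is a combatant. The feature names, sample sizes, and cut-off are all assumptions invented for the example.

```python
# Hypothetical illustration of a naive positive-unlabelled (PU) scoring setup.
# Nothing here reflects any real system's data or code; feature names, sample
# sizes, and the cut-off are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented surveillance features per person:
# [changed_phone_recently, member_of_flagged_group_chat, visited_flagged_area]
unlabelled_population = rng.integers(0, 2, size=(1000, 3))  # overwhelmingly civilians
alleged_positives = rng.integers(0, 2, size=(30, 3))
alleged_positives[:, 1] = 1  # the unverified "positive" set happens to share one trait

# Naive PU training: every unlabelled person is treated as a negative example,
# even though their status is simply unknown.
X = np.vstack([alleged_positives, unlabelled_population])
y = np.array([1] * len(alleged_positives) + [0] * len(unlabelled_population))
model = LogisticRegression().fit(X, y)

# Score the whole unlabelled population and flag the top-ranked slice.
suspicion = model.predict_proba(unlabelled_population)[:, 1]
TOP_FRACTION = 0.05  # invented operational cut-off with no legal significance
n_flagged = int(len(unlabelled_population) * TOP_FRACTION)
flagged = np.argsort(suspicion)[::-1][:n_flagged]

print(f"Flagged {len(flagged)} of {len(unlabelled_population)} unlabelled people.")
# A rank-based cut-off always produces targets: people who merely share a
# behavioural trait with the unverified "positive" set inherit a high score,
# with no individual assessment and no channel to contest the label.
```

Because the cut is taken from a ranking rather than from any verified ground truth, a pipeline of this kind will mark people for attack even if the underlying pool contains no combatants at all, which is precisely the inversion of the presumption of civilian status that HRW describes.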


The Compression of Judgment: AI, Targeting, and Temporal Collapse


AI compresses the decision-making process from minutes into milliseconds. Military decision-support systems now aid in selecting, prioritizing, and even recommending targets. But these systems introduce an opacity that undermines legal compliance, and their risks and limitations cannot be ignored.


Project Maven is the U.S. military’s flagship AI program, and researchers have documented its shortcomings. Where human analysts correctly identify tanks 84% of the time, Maven performs at 60%, dropping to 30% in adverse weather, a gap that leads to the misidentification of military targets and, with it, violations of civilians’ human rights. Despite this, Project Maven has been actively deployed in live-fire exercises such as ‘Scarlet Dragon.’ The justification generally offered for building such AWS is that ‘there isn’t a single, ready-made AI system that can just be plugged in; the military has to build it piece by piece.’ In principle, humans are supposed to work alongside the technology; in practice, the machine often ends up taking the lead, ultimately causing disproportionate deaths and violations of human rights.


Civilian Harm Without Accountability: Human Rights Violations in Real-Time


The deployment of AI in targeting cycles risks causing civilian harm with little to no accountability. The ICRC cautions that autonomous systems may “trigger a strike on the basis of a generalized ‘target profile,’” removing any identifiable human decision at the point of execution.


In Gaza, The Gospel, an Israeli targeting tool, generates strike lists that include civilian infrastructure, such as homes or so-called “power targets,” selected for their psychological impact on the population. Observers have noted that this method risks producing attacks grounded in social engineering rather than military necessity. When overlaid with cell tower triangulation data to gauge evacuations, these tools can incorrectly signal areas as devoid of civilians, increasing the risk of wrongful deaths.
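

The evacuation-signal failure mode is easy to see with a back-of-the-envelope sketch. The numbers below are entirely invented; the point is only that a phone-based proxy is structurally blind to anyone whose phone is off, broken, or uncharged.

```python
# Invented numbers illustrating why counts of active phones are a weak proxy
# for civilian presence. None of these figures describe any real area.
baseline_active_phones = 10_000   # phones normally visible to towers in the area
current_active_phones = 1_500     # phones visible after an evacuation order
assumed_population = 12_000       # assumed pre-evacuation residents
offline_residents = 3_000         # assumed residents with no working or powered phone

# The proxy: scale the normal population by the drop in visible phones.
estimated_remaining = assumed_population * (current_active_phones / baseline_active_phones)
print(f"Proxy estimate of people still present: {estimated_remaining:.0f}")

# The proxy can only count people whose phones are on. If even a fraction of
# the 3,000 offline residents stayed behind, they are invisible to the signal,
# so the area can be declared largely evacuated while thousands remain.
print(f"Residents the proxy cannot see at all: {offline_residents}")
```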


Systemic Risk: AI Enables Predictive Guilt and Algorithmic Bias


AI targeting introduces a logic of predictive guilt, under which individuals are marked as threats not for their actions but for their data patterns. Systems that “single out members of distinct ethnic groups as targets,” or that misclassify civilians as combatants, are in prima facie violation of IHL principles. Lavender uses behavioural traits, such as phone changes or group chats, to assign suspicion scores, offering no due process or opportunity for correction.


Machine learning systems operate as “black boxes” with no auditable trail for legal accountability. Nations using drones to influence morale in conflict zones, even under claims of psychological warfare, risk civilian lives and may undermine the legal and ethical legitimacy of self-defence efforts.


Toward Legal Innovation: Bridging IHL and Human Rights in the Age of AI


The ICRC and UN Secretary-General have jointly called for new legal frameworks to govern AWS. Their recommendation is clear: “Machines with the power and discretion to take lives without human involvement should be prohibited by international law.” They urge strict regulation of AI systems that apply force, with mandatory human control and clear legal standards.


As Bob Work, former U.S. Deputy Defense Secretary, wrote at the launch of Project Maven: “Although we have taken tentative steps... we need to do much more and move much faster to take advantage of recent and future advances.” 


Given the IHL violations caused by these AWS and AI-operated tools, it can be contended that speed is not a virtue in itself. Without the capacity to distinguish, to reason, and to weigh harm, AI cannot meet the legal tests of warfare. Employing AWS in current conflict zones therefore stands in clear violation of the human rights of civilians.


Conclusion


Autonomous systems are not merely reshaping warfare; they are redrawing the moral and legal boundaries that define it. As machines begin to make decisions previously governed by law and conscience, the framework of proportionality, and with it human dignity, stands on the brink. Until such systems can demonstrate transparent legal compliance, they must remain under meaningful human control or be prohibited altogether. The question is no longer whether the law will adapt, but whether it can preserve the humanity it was built to protect.


 
 
 
