
ICSPR: “Algorithms of Killing” Reveals How Israel Turned Artificial Intelligence and Global Technology Companies into a Structure of Genocide in Gaza
Date: 27 January 2026
Press Release
The International Commission to Support Palestinian People’s Rights (ICSPR) has released an extensive human rights investigation entitled “Algorithms of Killing: How Israel Turned Artificial Intelligence and Global Technology Companies into a Structure of Genocide in Gaza,” prepared by lawyer Rana Majed Hdeib. The investigation exposes an integrated, algorithm-driven system of extermination that represents a profound shift in Israeli military doctrine and has turned the Gaza Strip into an open laboratory for automated warfare.
The investigation confirms that what is unfolding in Gaza is not the isolated use of technological tools, but rather a comprehensive system that begins with massive data collection, moves through the digital classification of the Palestinian population, and culminates in killing decisions treated as a “computational process” executed at the push of a button, accompanied by an almost complete erosion of human judgment, ethical responsibility, and legal accountability.
It further explains that the Israeli military has shifted from “human intelligence–based targeting” to “data-driven targeting” through centralized artificial intelligence systems, most notably “Habsora” (The Gospel), which is used to generate hundreds of geographic targets daily by analyzing satellite imagery, drone footage, and intercepted communications. This has transformed warfare into an industrial production line of mass killing, according to descriptions by former Israeli intelligence officers.
The investigation also reveals the use of the “Lavender” system, an algorithm for classifying people that analyzes the personal data of millions of Palestinians in Gaza and assigns them numerical “threat scores” based on communication patterns, movement, and social relationships, without legal evidence. This has turned “digital suspicion” into a basis for direct targeting, in blatant violation of the principles of distinction and proportionality under international humanitarian law.
The report highlights the use of time-based tracking and real-time targeting systems that identify the “optimal moment” to strike, often when the targeted individual is at home with family members, leading to mass civilian casualties. It references international media reports indicating that military directives permitted the killing of dozens of civilians in strikes against a single algorithmically classified individual.
The investigation sheds light on the decisive role of military cloud infrastructure, particularly Project Nimbus, operated by Google and Amazon, which provided the massive data storage and processing capacity that enabled targeting algorithms to run and accelerated military decision-making to the point of hollowing out legal verification obligations. It also points to the significant role of Microsoft, through its Azure services, in running algorithms, analyzing unstructured data, and performing facial recognition, rendering the company a “structural enabler” of military operations.
The investigation affirms that these companies can no longer be considered neutral actors, but rather bear direct or indirect responsibility under the UN Guiding Principles on Business and Human Rights, especially given their prior knowledge of the use of their technologies in military operations targeting civilians and their continued provision of services despite this awareness.
The report further exposes the militarization of humanitarian data, including the use of aid-related information and facial recognition at crossings and so-called “safe corridors,” and the linking of this information to targeting systems. This has turned attempts to survive and to access food and medicine into “digital traps” and stripped humanitarian work of its neutrality.
Additionally, the investigation documents the use of artificial intelligence in digital disinformation, including deepfakes, fake accounts, and the algorithmic suppression of Palestinian content, contributing to the normalization of massacres and the creation of an environment of impunity. It also details the use of these technologies in assassinating senior figures through geospatial intelligence, drones, and advanced tracking systems.
ICSPR emphasized that these practices constitute grave breaches of international humanitarian law and amount to war crimes and crimes against humanity, while also establishing a dangerous precedent for the normalization of “automated genocide.” The investigation warns that the success of this model could make it exportable to other conflicts worldwide.
The report calls for international criminal accountability that includes not only military leaders but also system engineers and executives of the technology companies involved, the development of a new legal definition of “digital complicity in genocide,” the imposition of an international ban on the use of lethal AI systems without meaningful human oversight, and the protection of the data of populations under occupation from military exploitation.
The International Commission to Support Palestinian People’s Rights (ICSPR) stressed that what has occurred in Gaza constitutes a profound ethical and legal warning to the world, and that international silence in the face of this system paves the way for a future in which wars are run by blind machines that strip victims of their humanity and allow perpetrators to escape accountability.