These aren’t AI firms, they’re defense contractors. We can’t let them hide behind their models
In recent years, the intersection of artificial intelligence (AI) and military operations has raised serious ethical concerns. AI-driven warfare has transformed how conflicts are conducted, often with devastating consequences for civilians. This article examines those implications in the context of the recent conflicts in Gaza and Iran.
The Fog Procedure: A Military Strategy
One of the military strategies employed by Israel is known as the “fog procedure.” This tactic, first utilized during the second intifada, involves soldiers firing into conditions of low visibility, operating under the assumption that an unseen threat may be present. This approach represents a form of violence sanctioned by ignorance, where the act of shooting is justified as a deterrent.
AI Warfare: The New Frontier
Israel’s recent conflict in Gaza has been characterized as the first major “AI war.” In this context, AI systems played a pivotal role in generating lists of individuals deemed targets, based on the analysis of billions of data points. These algorithms assessed the likelihood that a person was affiliated with militant groups like Hamas or Islamic Jihad.
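None of the public reporting discloses how these systems actually work. Purely as an illustration of the pattern that reporting describes (a statistical score, an arbitrary threshold, a list), here is a deliberately simplified Python sketch; every name, field, and number in it is invented.

```python
# Hypothetical illustration only: toy threshold-based list generation.
# No real system, dataset, or score is depicted.
from dataclasses import dataclass

@dataclass
class Profile:
    person_id: str
    affiliation_score: float  # a statistical inference, not a confirmed identity

def generate_target_list(profiles: list[Profile], threshold: float = 0.7) -> list[str]:
    """List everyone whose inferred score clears the threshold.

    Note what is absent: no identity confirmation, no record of the
    evidence behind the score, no human review step.
    """
    return [p.person_id for p in profiles if p.affiliation_score >= threshold]

profiles = [
    Profile("A-1041", 0.72),  # listed on the strength of a 0.72 inference
    Profile("A-1042", 0.69),  # misses the 0.7 cutoff by 0.01; the threshold is a policy choice
]
print(generate_target_list(profiles))  # ['A-1041']
```

The point of the sketch is how little the code has to say: the entire moral weight of the decision is hidden inside a single floating-point number.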
Chosen Blindness: The Role of Algorithms
The concept of “chosen blindness” is central to understanding the implications of AI in warfare. In both the fog procedure and algorithmic targeting, the absence of clarity and accountability creates a dangerous environment: decisions about life and death are made by systems that cannot explain their reasoning, with tragic results.
The Minab School Strike: A Case Study
One of the most tragic examples of AI’s role in warfare occurred during the US-Israeli conflict with Iran. The strike on the Shajareh Tayyebeh elementary school in Minab resulted in the deaths of at least 168 people, most of whom were children. The targeting was described as “incredibly accurate,” yet the intelligence used was outdated, failing to account for the school’s civilian status.
The Consequences of Outdated Intelligence
The weapons struck precisely where they were aimed; the intelligence behind the aim was flawed. The building had been repurposed for civilian use nearly a decade earlier, but that change was never reflected in the targeting databases. The incident highlights the danger of relying on AI systems that have no mechanism for updating critical information.
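How such databases are structured has not been disclosed. Purely as a hypothetical sketch, the kind of safeguard whose absence the Minab strike exposes can be stated in a few lines: refuse to act on a classification that has not been re-verified within some maximum age. The field names and the one-year window below are assumptions for illustration.

```python
# Hypothetical data-freshness guard. Field names and the re-verification
# window are invented; no real targeting database is depicted.
from datetime import date, timedelta

MAX_AGE = timedelta(days=365)  # assumed policy: classifications expire after a year

def classification_is_stale(last_verified: date, today: date) -> bool:
    """Return True if a record's classification is too old to act on."""
    return today - last_verified > MAX_AGE

# A building last verified as a military compound roughly a decade ago
# would fail this check and be routed to human review instead of a strike.
print(classification_is_stale(date(2015, 3, 1), date(2025, 6, 20)))  # True
```

Nothing about such a check is technically demanding, which is precisely what makes its absence a choice rather than a limitation.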
Accountability in AI Warfare
The question of accountability in the context of AI warfare is complex. When civilian casualties occur, it is tempting to attribute blame solely to the algorithm. However, the logic that dehumanizes individuals and categorizes them as acceptable collateral damage predates AI technology.
The Case of the Bakr Family
In July 2014, four boys from the Bakr family were killed on a beach in Gaza. No AI was involved; the killings resulted from pre-classified targeting that misidentified the boys as threats. The incident underscores that such targeting errors are not a new phenomenon but an extension of long-standing military practice.
Statistics and Civilian Casualties
According to a classified Israeli military database, only about 17% of the more than 53,000 deaths recorded in Gaza were identified as Hamas or Islamic Jihad fighters. The remaining 83%, roughly 44,000 people, were not identified as combatants, a figure that raises serious questions about the efficacy and morality of current military strategies.
The Role of AI in Targeting Decisions
AI targeting systems do not create the logic that leads to civilian casualties; they automate that logic and apply its embedded biases at the scale of vast datasets. When a school is misclassified as a military compound, the failure is systemic rather than a mere malfunction of technology, and it is compounded because these systems operate without meaningful human oversight.
International Humanitarian Law and AI
International humanitarian law mandates that military operations adhere to strict guidelines protecting civilians: commanders must verify that targets are legitimate military objectives and take all feasible precautions to minimize civilian harm. Delegating those judgments to opaque AI systems undermines both obligations.
The Need for Human Oversight
In Gaza, algorithms processed extensive data on individuals and generated lists of potential targets from statistical inference rather than confirmed identity. Such targeting bypasses the human verification that both ethics and the law demand.
Conclusion
The integration of AI into military operations presents significant ethical and humanitarian challenges. As we move forward, it is crucial to ensure that accountability and oversight remain central to military decision-making processes. The consequences of failing to regulate AI warfare are already evident, and the need for a reevaluation of these technologies is more urgent than ever.

