Three questions regarding the US strike on the Minab school in Iran: the quest for accountability for targeting decisions

Published 24 March 2026

On 28 February 2026, a girls’ school in Minab, Iran, was hit by a missile. The school stood next to a facility of the Iranian Revolutionary Guard Corps (IRGC). Bellingcat’s analysis of video footage and satellite imagery shows that a US Tomahawk missile hit the area. The strike cost the lives of at least 175 people, most of them children. Marta Bo, Asser Institute researcher and coordinator of the research group on disruptive technologies, looked at the legal implications of the school strike.

Photo: Mehr News Agency


Question one: Assuming this was a case of ‘mistake’, however unfortunate, would that mean there is no criminal responsibility for the commanders involved under the laws of war?

The strike was devastating in scale: it occurred in the early days of the conflict, on a Saturday morning while school was in session, with approximately 260 students present, the majority of whom were killed or injured. 

Whether targeting errors violate the laws of war and can amount to war crimes lies at the heart of debates on the rules of targeting within international humanitarian law (IHL). The answer depends heavily on the specific circumstances of the attack. Here, those circumstances are telling. 

Reporting by the New York Times on the initial investigations suggests that the strike was carried out on the basis of outdated maps and intelligence. The school had indeed been part of a military base over a decade ago, but was walled off and repurposed by 2016. Visually, the site unmistakably presented as a school: recent satellite imagery clearly showed the building as a school, with painted walls, play areas, sports fields and other indicators, including school bags and brightly coloured shoes. Nothing suggests that any part of the building was being used for military purposes at the time of the strike, meaning it was not a lawful target. Initial investigations show that the school was the direct object of a precision strike, not a by-product of an attack on the nearby military base.

The strike on the school therefore cannot be considered incidental collateral damage resulting from an attack on the nearby military base. Rather, the preliminary investigations suggest it was a direct attack carried out on the basis of outdated intelligence and an apparent failure to verify the target.

To constitute a war crime, an attack must be wilfully directed against civilians or civilian objects as such, or launched in an indiscriminate manner (insufficiently directed at a specific military objective). Making any determination about the conduct of hostilities, and in particular about the intent, knowledge and foreseeability of harm to civilians required by this and other targeting-related war crimes, is notoriously difficult.

One of the key practical factors in determining IHL violations in the conduct of hostilities, and war crimes, is whether feasible precautions were taken. The preliminary investigation here points to massive intelligence failures and serious violations of the obligation to take precautions in attack, such as the use of outdated intelligence and the failure to verify the target. A failure to take precautions does not, in itself, constitute a war crime. However, negligent targeting conduct, such as gross failures to verify targets, can serve as evidence of the intent, knowledge or recklessness required to establish the war crime of attacking civilians or the war crime of launching indiscriminate attacks. Here, the violation of precautions is so apparent, and its result so foreseeable, that it would be difficult to argue that targeteers did not intend to launch a direct attack against civilians or an indiscriminate attack, or did not consciously accept the risk of doing so.

Question two: In recent years, AI has been increasingly used as part of targeting decisions. Did the US use AI in (mis-)identifying the school as a military objective?  

An investigation by the US military is underway. It has been widely reported, including in coverage of the Anthropic–Department of Defense controversy, that the Maven Smart System powered by Claude has been used in a number of US strikes. However, according to New York Times reporting, this specific attack was not necessarily carried out using Maven’s Claude-powered decision support systems.

So, while Claude was used in the broader campaign, it was unlikely to have been the primary cause of error in this specific case; human error appears to have been the key factor.

Question three: If AI was used, would it change the analysis of the violations of IHL and criminal liability? 

Investigating targeting mistakes is already difficult, due to classification, confidentiality constraints, and limited access to operational information. If AI was involved, the analysis may become considerably more complex. 

Systems like the one reportedly used face significant and well-documented challenges around explainability, auditability and traceability: the so-called ‘black box’ problem. These issues are notorious features of AI-enabled targeting systems and could seriously hamper efforts to understand what went wrong and to hold anyone accountable.

If it is confirmed that the system was fed outdated input data, this would represent a serious failure in the configuration and parameterisation of the AI platform itself. These platforms allow military planners to select and configure which datasets they want to see and use for AI-based targeting recommendations. They also include functions to flag data recency and to prompt additional verification steps, including the ingestion of more recent satellite or real-time imagery that would have clearly identified the building as a school and avoided the targeting error. The use of old geospatial data was therefore not an inevitable technical failure but a preventable one, and the failure to configure these parameters correctly and to ensure the quality and recency of input data would constitute, in itself, a violation of the obligation to take feasible precautions in attack. A minimal sketch of what such a recency gate could look like follows below.
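To make the point concrete, here is a minimal illustrative sketch of the kind of recency and corroboration gate such a platform could enforce before a recommendation reaches a human reviewer. Nothing here is drawn from any real targeting system: every name (Dataset, TargetRecommendation, recency_check) and the 30-day threshold are hypothetical.

```python
# Illustrative sketch only: a hypothetical recency and corroboration gate.
# No names or thresholds here are taken from any real targeting system.
from dataclasses import dataclass
from datetime import date, timedelta

MAX_GEOSPATIAL_AGE_DAYS = 30  # hypothetical recency threshold

@dataclass
class Dataset:
    name: str
    kind: str           # e.g. "geospatial", "satellite", "humint"
    collected_on: date

@dataclass
class TargetRecommendation:
    target_id: str
    supporting_data: list[Dataset]

def recency_check(rec: TargetRecommendation, today: date) -> list[str]:
    """Return a list of blocking issues; an empty list means the
    recommendation may proceed to human review."""
    issues = []
    cutoff = today - timedelta(days=MAX_GEOSPATIAL_AGE_DAYS)
    # Flag every supporting dataset older than the configured threshold.
    for ds in rec.supporting_data:
        if ds.collected_on < cutoff:
            issues.append(f"{ds.name}: data from {ds.collected_on} is stale")
    # Require at least one recent imagery source as independent corroboration.
    if not any(ds.kind == "satellite" and ds.collected_on >= cutoff
               for ds in rec.supporting_data):
        issues.append("no recent satellite imagery corroborates the target")
    return issues
```

Under a configuration of this kind, a base map last updated a decade ago would have been flagged as stale and the recommendation held back pending fresh imagery.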

A further, compounding failure is the apparent absence of cross-verification against other intelligence sources or up-to-date visual information. This is a well-known risk associated with decision support systems: over-reliance on AI-generated recommendations and confirmation bias, together leading to a failure to verify them independently.

This case would illustrate precisely the risks identified in connection with the incorporation of AI-based decision support systems into targeting cycles, and would demonstrate how such systems can contribute to failures of precautions and violations of the laws of war.

Finally, it is worth noting that there are existing and emerging mechanisms to improve decision logging and traceability in AI systems, including the use of blockchain-based audit trails, and these could prove valuable both to investigators and to broader accountability efforts going forward. A minimal sketch of such an audit trail follows below.
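As a hedged illustration of the underlying idea, the sketch below implements a simple hash-chained, append-only log in which every entry commits to the one before it, so any later alteration is detectable. The class and field names (DecisionLog, actor, payload and so on) are hypothetical and do not describe any deployed system.

```python
# Illustrative sketch only: a hypothetical hash-chained (blockchain-style)
# audit trail for targeting decisions. Names do not describe a real system.
import hashlib
import json
from datetime import datetime, timezone

class DecisionLog:
    """Append-only log where each entry commits to the previous one,
    so later tampering breaks the chain and becomes detectable."""

    def __init__(self):
        self.entries = []

    def append(self, actor: str, action: str, payload: dict) -> str:
        # Each entry records who did what, with what data, and when.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,      # e.g. operator ID or system component
            "action": action,    # e.g. "recommendation", "approval"
            "payload": payload,  # e.g. datasets and model version relied upon
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry["hash"]

    def verify(self) -> bool:
        """Recompute the chain; False indicates tampering or corruption."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()
                              ).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A log of this kind could record, for each recommendation and approval, which datasets and model version were relied upon, giving investigators a tamper-evident trail from which to reconstruct the decision chain after the fact.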
