May 1, 2026
Statecraft and Faultlines 12: AI Warfare, Civilian Harm, and International Humanitarian Law


by Scott Douglas Jacobsen and Irina Tsukerman

Irina Tsukerman is a human rights and national security attorney based in New York and Connecticut. She earned her Bachelor of Arts in National and Intercultural Studies and Middle East Studies from Fordham University in 2006, followed by a Juris Doctor from Fordham University School of Law in 2009. She operates a boutique national security law practice. She serves as President of Scarab Rising, Inc., a media and security strategic advisory firm. Additionally, she is the Editor-in-Chief of The Washington Outsider, which focuses on foreign policy, geopolitics, security, and human rights. She is actively involved in several professional organizations, including the American Bar Association’s Energy, Environment, and Science and Technology Sections, where she serves as Program Vice Chair in the Oil and Gas Committee. She is also a member of the New York City Bar Association. She serves on the Middle East and North Africa Affairs Committee and affiliates with the Foreign and Comparative Law Committee.

In this interview, Scott Douglas Jacobsen speaks with Irina Tsukerman about civilian casualties, artificial intelligence, and international humanitarian law in modern conflict. Tsukerman examines the legal and ethical risks of weak human oversight in AI-assisted military decisions, referencing scrutiny surrounding the reported Minab school strike in Iran. She argues that urban density magnifies civilian vulnerability and complicates distinction, proportionality, and precaution in warfare. The discussion also addresses children in conflict zones, genocide allegations as a high-threshold legal category, and the difference between deliberate attacks, tragic errors, negligence, and command responsibility under the evolving law governing contemporary armed conflict.

Scott Douglas Jacobsen: I can open with a broader question, because I think it matters. Have you noticed any trends that you consider important regarding international principles of state power? I am not sure whether this is the right moment to discuss civilian casualties, but perhaps it is.

Irina Tsukerman: I think it is. Civilian casualties are absolutely relevant, and there has been growing discussion of them in the news. At the same time, in the United States the broader debate has not focused primarily on civilian harm. A substantial part of the criticism of Trump’s war policy has instead centered on whether the campaign is effective and whether resources are being used wisely. Even within the administration, there are now public signs of debate about whether the United States should continue the war or move toward claiming success and exiting it.

There has also been serious scrutiny of the reported U.S. strike on a school in Iran. Reuters reported that the Pentagon elevated its investigation into the February 28 strike on the Shajareh Tayyebeh School in Minab after preliminary findings suggested U.S. forces may have mistakenly used outdated targeting data. That does not establish deliberate targeting of civilians, but it does point to a potentially grave intelligence failure with deadly consequences.

One claim should be handled carefully: I did not verify the specific assertion that artificial intelligence “hallucinated” and directly caused the strike. Reuters reported preliminary findings involving outdated targeting data. The available reporting points to possible targeting or intelligence failures still under investigation, not a proven case of autonomous AI error causing the attack.

The broader lesson is still stark. If military planners rely too heavily on automated or poorly corroborated systems, and if human verification is weak, the result can be catastrophic. The central issue is less a cartoon-villain desire to kill civilians than the deadly combination of rushed decisions, inadequate verification, and technological overconfidence. That is a very modern species of disaster: the bureaucratic machine wearing a shiny algorithmic hat.

Jacobsen: Are you referring to human-in-the-loop principles?

Tsukerman: Yes, that is what I have in mind.

More broadly, the central issue is the role of meaningful human oversight in military uses of artificial intelligence. That concern is very much alive in current policy debates. I was able to verify reporting that Anthropic has faced Pentagon-related commercial pressure and reputational concerns tied to its government business, and that Claude has been used in at least some U.S. military contexts. What I could not verify is that the Pentagon called a human-in-the-loop requirement “treasonous,” that Pete Hegseth publicly used that exact language against Anthropic, or that there is a confirmed contract-breach dispute over this specific strike.

The Pentagon is under serious scrutiny over the reported strike on the Shajareh Tayyebeh School in Minab, Iran. Reuters reported that preliminary findings pointed to the possible use of outdated targeting data, and the Pentagon has elevated its investigation into the incident. Reuters also found extensive public evidence showing that the site had long functioned as a school, which raises serious questions about target verification.

Major ethical and operational concerns remain, because any military use of AI or automated analytical tools without rigorous human verification can produce catastrophic results. If it is ultimately confirmed that a school was directly struck, the political and reputational consequences will be severe. 

Such an outcome would undercut any claim that the operation was tightly executed and would hand Iran a powerful propaganda argument. It would also intensify criticism in Congress and abroad, especially if investigators conclude that the strike resulted from poor verification rather than an unavoidable fog-of-war error. Reuters has also reported wider concern over civilian casualties and over the strategic effectiveness of the broader war effort.

Civilian harm is also not confined to Iran itself. Reuters has reported deaths, injuries, and infrastructure damage across the Gulf region and elsewhere as the conflict widened, including attacks and evacuations affecting multiple states. Even where the United States did not directly strike those countries, the humanitarian and political fallout is still being connected to the Trump administration’s decisions because they helped shape the escalation.

Jacobsen: When it comes to contemporary warfare beyond AI, what about countries that are largely rural and agricultural, compared with countries that are heavily urbanized? In some conflicts, only the cities are attacked. The density of civilian populations, both horizontally and vertically, seems to increase the likelihood of civilian casualties. What does that mean for international law and for the conduct of war involving ground troops, airstrikes, artillery, and other forms of attack? How does population density change those considerations?

Tsukerman: In the case of the United States, I do not see evidence that it deliberately set out to violate international humanitarian law with respect to civilians. If the reported school strike is confirmed, it appears more likely to have been a tragic error than a deliberate attack on civilians, though the investigation is still ongoing.

That said, there are major ethical and legal questions surrounding the use of AI in conflict. International discussions have been underway for years on how to regulate autonomous systems and preserve meaningful human control, but the law remains incomplete. International humanitarian law still requires distinction, proportionality, and precautions in attack, yet applying those principles in densely populated urban settings is profoundly difficult. 

Cities compress civilians, homes, schools, hospitals, transport networks, and military targets into the same physical space. That does not erase legal obligations; it makes compliance harder and failures deadlier. The law is not a magic wand. It is a framework, and frameworks do not stop missiles by themselves.

International law governing the use of AI in armed conflict is still evolving and has not fully caught up with the technology. States are still debating how autonomous weapon systems should be defined, regulated, and constrained. The International Committee of the Red Cross has called for new legally binding rules, and the U.N. Convention on Certain Conventional Weapons is still holding formal expert sessions on lethal autonomous weapon systems.

Broadly speaking, the strongest instinct in the law and policy debate is to preserve meaningful human judgment over life-and-death decisions. Existing international humanitarian law still applies: parties must distinguish civilians from military targets, avoid disproportionate civilian harm, and take constant care and feasible precautions to spare civilians.

Jacobsen: Another point. Some commentators have made genocide allegations, and third-party international reports have argued that certain conduct contains genocidal elements, with respect to the Israel–Hamas war. Those claims have not been resolved by a final international legal ruling, which, as an Israeli-Ukrainian lawyer noted, is a much higher threshold that has not yet been met, if at all. More broadly, how should we think about very young civilian populations? When a large share of the population is under eighteen, does that change how civilian deaths should be understood legally? Genocide is a specific legal finding with a very high threshold, especially regarding intent.

Tsukerman: International humanitarian law does not create one category of civilian protection for adults and another for children in the basic question of civilian status. Civilians are protected unless and for such time as they directly participate in hostilities. But age matters enormously in practice because commanders must assess foreseeable civilian harm when applying proportionality and precautions. In a very young population, the foreseeable risk to children may be higher, which increases the practical importance of verification, warnings where possible, and choosing means and methods that reduce civilian harm.

Jacobsen: Let me end with a hypothetical. Suppose a strike was aimed elsewhere, but shrapnel crossed into a nearby school and injured children. How does international law treat an accident like that as opposed to a deliberate strike? How do you assess leadership responsibility?

Tsukerman: Unfortunately, the answer turns on more than intent alone. A deliberate attack on civilians is prohibited. But an accident is not automatically lawful simply because the attacker says it was unintended. Investigators would ask whether the target was lawful, whether commanders took feasible precautions, whether the civilian harm was foreseeable, and whether the expected harm was excessive relative to the anticipated military advantage. If those duties were ignored, responsibility can still arise even without proof that civilians were the intended target.

In Minab, that is why the facts matter so much. Reuters reported that investigators are examining whether outdated targeting data caused U.S. forces to confuse a school with an adjoining military facility. Until that investigation is complete, it remains unclear whether the school itself was the intended target, whether the attack was directed at the adjacent base and struck the school, or whether multiple failures compounded the result.

If the evidence ultimately shows that decision-makers dispensed with basic verification or tolerated unsafe practices, the issue would no longer be a simple accident. It would become a question of negligence, command responsibility, and whether leadership accepted unlawful levels of risk to civilians. That is why the Pentagon’s elevated investigation matters.

Jacobsen: Thank you for your time.

Tsukerman: Sounds good. Be well and stay safe.

Scott Douglas Jacobsen is a contributor to The Washington Outsider. He is the Founder and Publisher of In-Sight Publishing (ISBN: 978–1–0692343; 978–1–0673505) and Editor-in-Chief of In-Sight: Interviews (ISSN: 2369–6885). He writes for International Policy Digest (ISSN: 2332–9416), The Humanist (Print: ISSN, 0018–7399; Online: ISSN, 2163–3576), Basic Income Earth Network (UK Registered Charity 1177066), Humanist Perspectives (ISSN: 1719–6337), A Further Inquiry (SubStack), Vocal, Medium, The Good Men Project, The New Enlightenment Project, The Washington Outsider, rabble.ca, and other media. His bibliography index can be found via the Jacobsen Bank at In-Sight Publishing, comprising more than 10,000 articles, interviews, and republications in more than 200 outlets. He has served in national and international leadership roles within humanist and media organizations, held several academic fellowships, and currently serves on several boards. He is a member in good standing of numerous media organizations, including the Canadian Association of Journalists, PEN Canada (CRA: 88916 2541 RR0001), Reporters Without Borders (SIREN: 343 684 221/SIRET: 343 684 221 00041/EIN: 20–0708028), and others.
