[New blog post] ‘What law applies when the alien worldship attacks?’

Published 16 January 2026

In a thought-provoking new blog post, Jonathan Kwik (Asser Institute) and Hendrik Mathis Drößler (University of Salzburg) use a spectacular fictional conflict - an alien world-ship hovering over Europe - to demonstrate how fictional scenarios can be used for teaching, applying and stress-testing international humanitarian law (IHL).


The post, published on the Völkerrechtsblog, drops the reader into a makeshift war room below the Peace Palace, where a general and her legal adviser scramble to interpret the established laws of war in the face of interstellar combat. Their scenario poses a question the Geneva Conventions were never meant to answer: does humanitarian law govern a conflict against sentient non-humans?

Not simply sci-fi

The authors stress, however, that this is not simply sci-fi for its own sake. The scenario is presented as a ready-to-use case study for lecturers and for military and government legal advisers. Fiction, they argue, offers a way to discuss many current debates in IHL in an apolitical - and often more light-hearted and accessible - context, turning fictional crises into practical instruments for teaching and stress-testing IHL principles.

For example, the scenario challenges the thresholds for the classification of armed conflict by asking whether combat against aliens would be governed by the IHL applicable to international or to non-international armed conflict. It also engages with contemporary debates on deterritorialised statehood - a crucial factor when the nature of the territory, actors, and intensity of the conflict are contested.

Thought experiment

Furthermore, the post puts targeting law through its paces. The city-sized, mixed-use alien vessel becomes a thought experiment for interpreting the core IHL principles of distinction (between combatants and civilians/objects), feasible precautions in attack, and proportionality - all echoing the dilemmas faced in dense, real-world urban battlefields.

Finally, the authors probe the very edge of current law by questioning personhood and protection: if sentient non-humans are engaged in the fight, do they count as "objects," "protected persons," or does their presence demand a principled extension of humanity? This, the authors suggest, has wider implications for the interpretation and regulation of lethal autonomous weapon systems and other artificial agents in future conflicts.

The authors invite readers to use such scenarios in classes, exercises, and informal discussions, urging them to share where the current rules hold firm and where they strain, thereby feeding those critical insights into the ongoing international discourse on international humanitarian law. 

Read the full blog post.


About Jonathan Kwik

Dr Jonathan Kwik is a researcher in international law at the Asser Institute attached to the ELSA Lab Defence project. He specialises in techno-legal research on the military use of artificial intelligence (AI) related to weapons, the conduct of hostilities, and operational decision-making. Read more. Kwik is part of the Asser Institute’s research strand ‘Regulation in the public interest: Disruptive technologies in peace and security’.


Read more

[New blogpost] Can AI independently decide to commit wartime treachery?
Defensive AI systems could independently learn to violate the laws of war by mimicking humanitarian organisations, a new analysis warns. In a recent blogpost on Articles of War, researchers Jonathan Kwik and Adriaan Wiese argue that autonomous ‘Cyber Defence Agents’ may eventually spoof protected symbols, such as the Red Cross emblem, to trick enemies into aborting attacks. Read more.

[Policy brief] Can Rules of Engagement and military directives effectively control military AI?
As Artificial Intelligence (AI) continues to reshape modern warfare, the need for effective control over military AI systems has become increasingly urgent. Insights from a recent expert workshop, led by researcher Jonathan Kwik and colleagues, underscore the need for strategic, flexible, and context-specific AI guidelines. Read more.  

[New publication] ‘Iterative assessment’: A framework for safer military AI systems?
In a new publication, researcher Jonathan Kwik (Asser Institute) proposes an innovative approach to reduce civilian harm from successive deployments of military artificial intelligence (AI) systems. The chapter offers practical guidance for militaries committed to responsible AI use, providing a roadmap based on International Humanitarian Law (IHL) principles that can be used for protecting civilian populations while maintaining operational effectiveness. Read more.