[Conference presentation] Asser Institute delegation at ESIL 2025: Military applications of AI and Big Tech’s role in reconstructing international law

Published 25 September 2025
By Taylor Kate Woodcock


From 11 to 13 September, a delegation from the Asser Institute attended the 2025 Annual European Society of International Law (ESIL) Conference, organised by the Institute of International and European Union Law at Freie Universität Berlin. This year's conference theme was the many facets of 'reconstructing international law'. The conference was attended by academic director and ESIL board member Machiko Kanetake, senior researcher and strand coordinator Christophe Paulussen, researcher Taylor Kate Woodcock, and Asser Press publisher Frank Bakker. At the Agora session on 'Actors of Reconstruction', Taylor Woodcock presented on military applications of artificial intelligence (AI) and the role of Big Tech in the reconstruction of international law.

Reconstruction actors

Taylor Woodcock's research examines how the subjectivities inherent in the discourse around, and the design of, military AI shape legal reasoning, with cascading effects for military targeting processes. Current research indicates that AI shapes how we see and understand the world – but who shapes these algorithms? Her presentation explored how military innovation today is driven by technology companies: not only 'Big Tech' conglomerates, but also small and medium-sized enterprises and venture capital firms. While the military–industrial complex is by no means new, the advent of data-driven applications of AI – such as machine learning (ML) – has brought these companies to the fore in unprecedented ways. In particular, industry has a strong presence in the discourse and developments around military AI.

Industry shaping military AI

Taylor Woodcock argues that private corporations developing AI-enabled technologies for defence contribute to understandings and interpretations of international law. Specifically, design decisions made across the lifecycle of AI systems can have cascading effects, shaping how commanders make targeting decisions and conduct legal assessments. This justifies greater attention to the lifecycle of AI systems: once these systems are deployed and used, it is often difficult to resist the ways they shape decision-making.

Whilst AI systems are often framed as objective and neutral, their development in reality entails a significant number of subjective decisions by designers. Seemingly technical choices – how data is selected and prepared, how architectures are designed, and what counts as accurate performance – shape outcomes for end users. For example, decisions about how objects or individuals are labelled in data can go on to shape how end users determine what counts as a military object, such as a tank, or who counts as a member of a militant group.

In addition to design, industry plays a role in shaping the discourse around military AI and international law. What tech companies sell becomes what we can imagine, creating and legitimising a significant role for AI in lawful warfare. When industry markets AI as making militaries better, faster, and stronger, we come to imagine that it can also make them more lawful.

Together, these findings suggest that it is time to take industry seriously as an actor of reconstruction in international law.

About Taylor Kate Woodcock

Taylor Kate Woodcock is a researcher at the Asser Institute in the research strand on Disruptive Technologies in Peace and Security. Her research examines the implications of the development and use of military applications of artificial intelligence (AI) for international law, with a specific emphasis on international humanitarian law (IHL). Her PhD thesis, entitled 'Human-Machine (Learning) Interactions: War and Law in the AI Era', was completed as part of the Designing International Law and Ethics into Military Artificial Intelligence (DILEMA) project, funded by the Dutch Research Council (NWO).

Read more

[New publication] Rhetoric and regulation: The (limits of) human/AI comparison in debates on military artificial intelligence
The promise of artificial intelligence (AI) is ubiquitous and compelling, yet can it truly deliver 'better' speed, accuracy, and decision-making in the conduct of war? As AI becomes increasingly embedded in military targeting processes, legal and ethical debates often ask who performs better: humans or machines? In a new publication, researchers Klaudia Klonowska and Taylor Kate Woodcock argue for the urgent need to critically examine the assumptions behind the human/AI comparison and its usefulness for legal analysis of military targeting. Read more.
