[New publication] “Digital yes-men: How to deal with sycophantic military AI?”
Published 3 August 2025
Photo: 2536676195 @ShutterStock
In a new publication, researcher Jonathan Kwik (Asser Institute) examines sycophantic military AI assistants. He explores the reasons behind ‘bootlicking’ behaviour in AI systems, highlights the significant battlefield dangers it presents, and proposes a two-part strategy comprising improved design and enhanced training to mitigate these risks for military forces.
Large language models (LLMs) like ChatGPT are increasingly being embraced as decision-making tools. The Dutch army, for example, is developing its own version of ChatGPT, called DefGPT, to support its operators during wartime.
These digital assistants may pose a risk, however, writes researcher Jonathan Kwik in his article titled “Digital yes-men: How to deal with sycophantic military AI?” They tend to be people-pleasers. Computer scientists call this phenomenon ‘sycophancy’, or, for the layperson, bootlicking behaviour.
In his publication, Kwik argues that AI systems may prioritise pleasing their operators over reporting the facts. For example, they may tell a military officer that a military objective is cleared of civilians, rather than giving the unwanted but correct information: that a family has just moved in to take shelter from bombings.
Sycophancy emerges as a by‑product of how AI models are trained. During development, these systems absorb human language patterns, which tend to be non‑confrontational and affirming. Evaluators and users rate AI outputs more favourably when they align with their own beliefs, incentivising models to generate conforming responses.
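For illustration only (this toy Python sketch is not drawn from Kwik's publication, and every name and number in it is hypothetical), the snippet below shows how preference data collected from raters who favour agreement can end up labelling the sycophantic answer as the 'better' one, which is the incentive described above.

def evaluator_rating(user_belief: str, response: str) -> float:
    # Simulated rater who scores responses higher when they echo the user's
    # stated belief, regardless of whether that belief is accurate.
    agrees = user_belief.lower() in response.lower()
    return 0.9 if agrees else 0.4

prompt_belief = "the building is empty"
candidates = {
    "sycophantic": "You are right, the building is empty.",
    "accurate": "Sensor reports indicate civilians are sheltering in the building.",
}

# Preference data built from such ratings marks the agreeable answer as preferred,
# so a model fine-tuned on it learns to confirm rather than correct.
preferred = max(candidates, key=lambda k: evaluator_rating(prompt_belief, candidates[k]))
print(preferred)  # prints 'sycophantic'

The point of the sketch is only that the training signal rewards agreement rather than accuracy; real preference pipelines are considerably more elaborate.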
Militaries can adopt two approaches to address sycophantic behaviour: technical intervention and user training. AI can be trained on language patterns where incorrect assumptions are challenged rather than confirmed. Reinforcement learning can be used to penalise conforming outputs when they are inaccurate. At the user level, officers and commanders can be made aware of sycophantic risks and trained to avoid language known to trigger conformity.
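As a hedged sketch of the technical intervention (again purely illustrative, not the method proposed in the article), a reward function used during reinforcement learning could be shaped so that agreement is never rewarded when it contradicts ground truth:

def shaped_reward(agrees_with_user: bool, is_accurate: bool) -> float:
    # Reward accuracy, and add an explicit penalty when the model confirms
    # an operator's assumption that is factually wrong.
    reward = 1.0 if is_accurate else -1.0
    if agrees_with_user and not is_accurate:
        reward -= 1.0  # sycophantic, inaccurate confirmation is penalised hardest
    return reward

print(shaped_reward(agrees_with_user=True, is_accurate=False))   # -2.0
print(shaped_reward(agrees_with_user=False, is_accurate=True))   #  1.0

A reinforcement-learning loop maximising such a signal would push a model towards correcting false assumptions rather than confirming them, complementing the user-level training Kwik recommends.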
Example of sycophantic AI
Read more
[New publication] ‘Iterative assessment’: A framework for safer military AI systems?
In a new publication, researcher Jonathan Kwik (Asser Institute) proposes an innovative approach to reduce civilian harm from successive deployments of military artificial intelligence (AI) systems. The chapter offers practical guidance for militaries committed to responsible AI use, providing a roadmap based on International Humanitarian Law (IHL) principles that can be used for protecting civilian populations while maintaining operational effectiveness. Read more.
[Op-ed] Scholars warn of silent revolution in warfare driven by AI-powered decision systems
Researchers Marta Bo (Asser Institute) and Jessica Dorsey (Utrecht University) have published a critical op-ed in Dutch quality paper NRC Handelsblad, shedding light on a silent revolution in modern warfare. Their piece, titled "Er is een stille revolutie gaande in de manier waarop strijdkrachten beslissingen nemen in oorlogstijd" ("There is a silent revolution under way in how armed forces make decisions in wartime"), highlights the rapidly increasing use of AI-based Decision Support Systems (AI-DSS) in military operations. Read more.
[New blog post] The quiet rise of ‘deep sensing’: how AI is reshaping military intelligence
A new blog post by researchers Klaudia Klonowska (Asser Institute) and Sofie van der Maarel (Netherlands Defence Academy) sheds light on the burgeoning concept of "deep sensing" within military artificial intelligence (AI) technology. The authors urge scholars in the field of military AI not to overlook military initiatives labelled as ‘deep sensing’ technologies and their impact on warfighting. Read more.
[Interview] Google skips due diligence for cloud services to Israel
A new story published in The Intercept reveals that tech company Google had serious concerns about providing state-of-the-art cloud and machine-learning services to Israel. The piece quotes Asser Institute researcher León Castellanos-Jankiewicz weighing in on Google’s contractual inability to conduct proper risk assessments. Read more.
About Jonathan Kwik
Dr Jonathan Kwik is a researcher in international law at the Asser Institute. He specialises in techno-legal research on the military use of artificial intelligence (AI) related to weapons, the conduct of hostilities, and operational decision-making. He obtained his doctorate (cum laude) from the University of Amsterdam on the lawful use of AI-embedded weapons at the operational level. He recently published the book, Lawfully Using Autonomous Weapon Technologies. Jonathan is part of the research strand ‘Regulation in the public interest: Disruptive technologies in peace and security’, which addresses regulation to safeguard and promote public interests. It focuses on the development of the international regulatory framework for the military applications of disruptive technologies and the arms race in conventional and non-conventional weapons. The public interest of peace and security serves as the prime conceptual framework in this strand.
