[Interview] Asser researcher Sadjad Soltanzadeh: ‘It is important for us to study human activities as much as we study any technological object’
Published 30 May 2022
By Diva Estanto
Dr Sadjad Soltanzadeh is a postdoctoral researcher in the ethics and philosophy of technology. He recently published a book, entitled ‘Problem-solving technologies: A user-friendly philosophy’. On 8 June, the Asser Institute and the DILEMA Project are hosting a book launch symposium around this book. Soltanzadeh: ‘What I tried to do in this book is to develop a metaphysics of technology, a theory of technology, which is also useful for looking at practical and everyday cases of how humans interact with technology at the subjective level’. In this interview, we explore his research at the Asser Instituut as well as how his theory of technology helps frame questions related to the use of AI in warfare.
What did you do before joining the Asser Instituut?
‘I have a background in a few different fields. I grew up in Iran, where I did my mechanical engineering degree. I started as a mechanical engineer and was mostly involved in robotics projects. After a couple of years, I became more interested in philosophy, and I did a Master's degree in Philosophy of Science and Technology at the University of Twente in the Netherlands. I really enjoyed the subject, so I did a PhD in philosophy in Australia with CAPPE (Centre for Applied Philosophy and Public Ethics). I also did a degree in education and practised a bit of high school teaching for a while. But then I decided to re-join academia, so I did a postdoc in Australia a few years ago. And now I'm in my second postdoc position with Asser.’
What is your main research project at the institute?
‘I am part of the DILEMA group, which is focused on designing international law and ethics into military technologies. One of the central concepts for us is human and technological agency, and how they interact with each other. This includes autonomous systems: how they function, and what they could or could not do for legal or ethical reasons. Apart from this, we dig deeper than the question of whether it is good or bad to have a particular technology, to also look at how each technology can impact human agency and how humans and technologies could form teams and jointly achieve certain goals. Mostly, of course, in military contexts.’
Could you tell us about the philosophy you developed in your book ‘Problem-solving technologies: A user-friendly philosophy’?
‘The philosophy I develop in this book is called activity realism. In this book, I looked at some conceptual limitations in the current state of the philosophy of technology. When we look at the relationship between the theoretical dimensions of philosophy of technology and the more applied and practical dimensions of the field, we find a gap between these two. There are, of course, different reasons why applied and theoretical dimensions are not connected well with each other. I argue that one of the reasons why our theories are not incorporated into more applied arguments is because the theoretical concepts and distinctions that we make are generated in a way that makes them unfit for certain practical applications. So, what I tried to do in this book is to develop a metaphysics of technology, a theory of technology, which is also useful for looking at practical and everyday cases of how humans interact with technology at the subjective level.’
How would you connect your philosophy with the current debate on the use of AI and automated weapons systems in warfare?
‘In the philosophy I developed, when we look at a technology or system, we should also look at how it is incorporated into human activities. We should not look at it in isolation, without seeing how it impacts humans and what sort of things it is supposed to do for humans. We should realise that technologies are not supposed to engage in any activities. I make a distinction between engaging in activities and performing actions. The parties that engage in activities are humans; technologies are not really engaged in activities. What they do is perform certain tasks for humans. If we want to optimise technologies, we should see what humans want to achieve. When we approach technologies from that perspective, we also realise that there are certain things that technologies may not be able to do. Not because we don't want them to do it for legal or moral reasons, but because they are unable to do it in the first place.
When you approach technologies from that perspective, you realise that the activities technologies can or cannot perform often require humans to be involved. For example, in the case of designing autonomous vehicles, I argue that we always also need to have a manual control option. Of course, this argument can be generalised to a lot of other technological systems, including the ones that are used in warfare. If there are activities that technologies cannot perform, then we have stronger reasons to make sure that humans are involved. Another concern with military AI is that, although such systems will not be subject to some human errors arising from fear, anger or lack of concentration, they will not be able to reliably discriminate between combatants and civilians. This means that if such systems are incorporated in military activities, we need to be aware of what their strengths and limitations are. Otherwise, even if weapons are designed with the most advanced AI, that would not optimise our practices in terms of respecting legal and moral norms and values. In general, when we think about optimisation, we do not want to focus on optimising the technology itself. What we need to focus on is optimising the activity. This is all linked to the activity realist philosophy: it is important for us to study human activities as much as we study any technology.’
What inspired you to write the book?
‘I shared a house with three others in Canberra. In that old shared house, with different tenants coming and going, there were always random things around, and they were a part of the whole culture of the house. For example, there was a canvas that was not painted; the artist hung it there so people could interpret what it is for themselves. Another thing was a hammer, which was hung on the wall because it was aesthetically pleasing, and there was a flashlight that you could turn on to cast a nice light on the hammer. A note was placed next to it, which said ‘this is not a hammer’. Of course, we were still using the hammer, but most of the time it was just up there as decoration. For instance, we smashed a very old washing machine using the hammer to turn it into a fire pit. These were things that inspired me. We were a bit creative in the way we were using things. At the same time, I was also doing my PhD in philosophy of technology and I started developing these ideas. It took quite a few years for all the ideas to develop further and then turn into a book, around five to six years.’
What are your proudest and most challenging moments during the process of writing this book?
‘I think one of the proud moments was when an article that I used in the book won an international award at SPT 2019. That was a very important paper because it was the essence of the book. If it had not been well received, it would have affected the writing of the book. The piece is slightly critical of the current literature, but it's also constructive and opens up a new way of doing philosophy and philosophy of technology. Other than that, there are all these little milestones that get you a bit more excited about the whole thing. For example, when I finished my first full draft, and then, after that, when I finished the final draft that I sent to the publisher. But overall, I think the proudest moment was when I actually received the book and could hold it in my hands.
As for the challenges, I think writing a book is challenging for different reasons. One of them is that you really need perseverance and you need to be interested in the topic. If you're not interested in the topic, there's no way you could write a book. It is always challenging because it's not an overnight project. There are times when you're not happy with it and you feel that you have to revise everything all over again. Those moments can be a bit demoralising. But at the same time, if you keep your eyes on the target and personally connect to your ideas, then you can always stay motivated and really enjoy the process instead of finding it demoralising.’
Why do you think it is important for people working in the field of technology to work together with people from the field of international law?
‘Well, I'm a little bit of a systems thinker. In systems thinking, one of the main ideas is that we shouldn't approach the different problems we face as owned by specific disciplines. When we start looking at things that way, we create more problems. One thing that we should really do is not to look at our problems through the lens of one particular field. Rather, we should embrace different ideas coming from different fields. This, of course, can be a slow process because we need to understand each other's vocabulary. For me, it's very fascinating to work with international lawyers at Asser. Recently, a computer scientist also joined our team at DILEMA. I think it's very fascinating to be able to connect with others outside of your niche and arrive at some common understanding of concepts.’
What do you hope to achieve here at the Asser Institute?
‘I have already achieved some things, at different levels. At the personal level, I would say last year was probably the most productive year of my academic career. At the same time, one thing that I've gained is connections and relationships. At DILEMA, we collaborate and discuss concepts, and it's a very productive and positive work environment. I think the informal chats that I have had with my colleagues are also very important, especially because we have different backgrounds. I have come to realise that the most important things really are connections and people. I have really enjoyed my work with people in the DILEMA group, and at Asser in general.’
About Sadjad Soltanzadeh
Dr Sadjad Soltanzadeh is a postdoctoral researcher in ethics and philosophy of technology. Sadjad is working with the DILEMA project with the goal of understanding the legal, philosophical, and moral importance of human autonomy and human agency in the context of autonomous systems. Sadjad has a multidisciplinary background and has experienced diverse workplace and academic environments in Iran, the Netherlands, and Australia. He holds master's and doctorate degrees in philosophy of science and philosophy of technology. He is also a qualified and experienced mechanical engineer as well as a secondary school teacher.
Sadjad has developed a philosophy, named activity realism, for which he won the SPT2019 Early Career Award. In this philosophy, the study of objects and systems should be preceded by the study of activities of reflective beings, such as humans. As an engineer, Sadjad has been involved in a number of projects, including designing and building robots at the ARAS robotic group, and collaborating in an interdisciplinary research group to investigate the dynamic behaviour of the human heart for fault diagnosis.
Advance your knowledge of the ethics and philosophy of technologies
On Wednesday 8 June 2022, the Asser Institute and the DILEMA Project are hosting a book launch symposium around the recently published book by Dr Sadjad Soltanzadeh, Problem-Solving Technologies: A User-Friendly Philosophy (Rowman & Littlefield Publishers, 2022). As technologies become ubiquitous in our everyday personal and social lives, it is essential to understand the role and impact of technologies on human activities. Against the background of discussions on approaches to the ethics and philosophy of technologies, the symposium will explore the complex interrelationships between technologies and humans, and invite us to re-examine our societal and governance structures. Register now.
Problem-solving technologies: A user-friendly philosophy (2022)
In our everyday activities we use material objects in different shapes and forms to solve various practical problems. We may use a knife to tighten a screw, turn an old washing machine drum into a fireplace, use the edge of a kitchen counter top to open a bottle, or place a hammer on the puncture patch glued to a bike’s inner tube to exert pressure on the patch until the glue dries. If we want to understand the role which material objects play in our everyday activities, we need to move away from universal identifications of objects. This is because universal identifications are not sensitive to contextual differences and cannot describe how each individual user connects to their surrounding objects in an infinite variety of contexts. Problem-Solving Technologies provides a user-friendly understanding of technological objects. This book develops a framework to characterise and categorise technological objects at the level of users’ subjective experiences.
Strictly human: Limitations of autonomous systems (2021)
Can autonomous systems replace humans in the performance of their activities? How does the answer to this question inform the design of autonomous systems? The study of technical systems and their features should be preceded by the study of the activities in which they play roles. Each activity can be described by its overall goals, its governing norms, and the intermediate steps which are taken to achieve the goals and to follow the norms. This paper uses the activity realist approach to conceptualise autonomous systems in the context of human activities. By doing so, it first argues for epistemic and logical conditions that illustrate the limitations of autonomous systems in the tasks they can and cannot perform, and then discusses the ramifications of these limitations for the design of autonomous systems.
The DILEMA project (Designing international law and ethics into military AI)
This project, led by Asser Institute senior researcher Berenice Boutin, looks into interdisciplinary perspectives on military applications of artificial intelligence (AI), with a focus on legal, ethical and technical approaches to safeguarding human agency over military AI. It analyses in particular the subtle ways in which AI can affect or reduce human agency, and seeks to ensure compliance with international law and accountability by design. The project investigates why it is essential to safeguard human agency over certain functions and activities, where it is most critical to maintain the role of human agents, and how to technically ensure that military technologies are designed and deployed in line with ethical and legal frameworks.