As the second session of the UN’s Group of Governmental Experts on Lethal Autonomous Weapons Systems convened last month to discuss measures relating to the normative and operational framework for emerging technologies, we interviewed Nicolò Borgesano. Nicolò is an Assistant Programme Officer at the Geneva Centre for Security Policy, where he currently conducts research on international humanitarian law, emerging technologies in the military, data and machine learning. He holds an LLM in international humanitarian law and human rights from the Geneva Academy, where he focused on the rules on the use of force in the conduct of hostilities and the treatment of prisoners of war. Prior to his specialization in international law, Nicolò pursued a combined bachelor’s and master’s degree in law at the Catholic University of Milan, and took comparative law courses at the University of Technology Sydney and Insubria, focusing on the fundamental rights frameworks of Australia and Switzerland.
Q1: Many scholars (including yourself) insist on the challenges posed by ‘aggressive’ machines or systems. What does the term ‘aggressive’ mean, in this context? What would then be the differentiation between ‘aggressive’ and ‘non-aggressive’ systems?
The term ‘aggressive’, when applied to LAWS, can be dissected in three different ways.
First, from a jus ad bellum perspective it may refer to the use of LAWS contrary to the prohibition of the use of force established by the UN Charter and customary international law. I would therefore describe as ‘non-aggressive’ the use of LAWS in self-defence, in a manner that is necessary and proportionate to that goal.
Second, ‘aggressive’ may also have an operational/tactical connotation and refer to LAWS that initiate attacks on the battlefield unprovoked. Indeed, it may be imprecise to label as purely ‘aggressive’ a LAWS that is programmed to react only when it (or friendly forces or civilians) has undoubtedly and directly been attacked beforehand, just as one would not criminally qualify as ‘murder’ an act of lawful personal self-defence. In the context of the law of armed conflict (LOAC), this obviously does not mean that the rules would apply unequally to the parties to a conflict depending on who attacked first, nor that ‘defensive’ LAWS would necessarily and undoubtedly comply with the rules of targeting. It may nonetheless be argued that systems that ‘shoot second’ would, at a minimum, aid compliance with the principle of distinction when compared with purely ‘aggressive’ LAWS. For example, one may consider the concept of direct participation in hostilities propounded by the ICRC, where direct causation and the threshold of harm would be paradigmatically present whenever a human attacks first. In such cases, doubts would be confined to the element of belligerent nexus, which narrows down the checklist for lawful targetability of individuals. (On a side note, and from a policy perspective, I must stress that human targetability is rejected by many organizations, such as the ICRC, which propose restricting the use of LAWS to military objects ‘by nature’. This approach is generally chosen to help overcome not only the mutability of legitimate human targets, but also ethical concerns, as I will explain further in Question 4.)
Third, when referring broadly to systems in their interaction with the real world, from a technical and outcome-oriented perspective ‘aggressive’ (but also ‘defensive’) systems can be juxtaposed with ‘peaceful’ systems that are in no way involved in the use of force (let alone its illegitimate use). Thinking of autonomous systems used in the military for logistics, transportation and navigation, this distinction helps explain why the biggest humanitarian concerns lie with ‘aggressive’ machines. A further nuance can be drawn here between an ‘act’ and the ‘selection of the best course of action’. Consider machine learning algorithms that support decision-making in the conduct of hostilities: they are not ‘aggressive’ stricto sensu, yet they can assist humans in aggressive acts.
Q2: What is a permanent human-centred approach and why is it considered necessary? Also, to add to this, do you think this is possible given that cyber warfare might require responses in microseconds, which humans will not be able to provide or oversee?
‘Meaningful human control’ (Article 36), ‘appropriate human judgment’ (US DoD) and ‘human-centred approach’ (ICRC) are terms frequently employed by organizations and States alike to express the idea that LAWS (or AI, as utilized in the military) should be developed and used under human supervision. There are nuanced differences between these terms. The concept propounded by the ICRC includes elements of human control and judgment, which should be satisfied throughout the entire lifecycle of systems endowed with machine learning. Additionally, humans should understand such systems’ capabilities and limitations.
The necessity of human control is generally justified by both legal and ethical arguments. As to the former, keeping humans at the centre reflects the now uncontroversial understanding that IHL applies to humans rather than machines (as argued by Sassòli some ten years ago), and that humans must remain ultimately responsible and accountable for the results of faulty selection processes carried out by algorithms. The ethical basis draws on principles at the heart of IHL, such as humanity and human dignity. The approach is promoted as ‘permanent’, potentially meaning that humans will always remain involved, as far as humanity, law and ethics are concerned.
To answer your question on cyberwarfare, let me take a step back to stress some terminological differences between ‘control’ and ‘intervention’. One view is that of Seixas-Nunes, who makes the distinction very clear in his recent treatise on AWS. He explains that autonomy in AWS is the capability to operate without human intervention throughout the observe-orient-select-act (OOSA) loop. This feature, however, does not exclude control prior to deployment (e.g., the training of a supervised learning algorithm) or through deactivation procedures (e.g., to comply with the principle of precautions). In other words, on this view, intervention is just one way of exercising control.
Logically speaking, if a human-centred approach implies human control, and if human control is not limited to human intervention, humans may well exercise control through deactivation to comply with IHL. From here, one can raise the interesting question of whether a human is, at all times, practically capable of exercising control and ‘deactivating’ a system, should it become apparent that the rules of targeting are being, or will be, violated. This would of course depend on the features of the system that is developed or deployed, and ultimately on the factors that trigger the application of force. In light of this, one may argue that developing a weapon that does not allow for deactivation, or that leaves only a very restricted window for a human veto (which is generally the case in cyberwarfare, as you argue, but may also be seen in purely physical domains – see, e.g., the USS Vincennes tragedy), amounts in practical terms to not allowing control. This is one of the reasons why, in the view of the ICRC for example, it is so important that systems be adjusted to match human decision-making tempo.
Q3: Do you believe that ethical questions would risk remaining unaddressed by States that do not want a legally binding instrument on autonomous weapons systems?
According to the Automated Decision Research website, there are currently 10 States that do not support a legally binding instrument on AWS. Most of them argue that guiding principles and soft-law regulations would suffice, that the existing framework of IHL is perfectly suitable for new technologies, and that no modernization or adaptation whatsoever is needed.
However, those that reject binding rules do not necessarily leave ethics and morality out of the equation. Many States do address the matter formally. For example, although the US is strongly against a treaty, the DoD propounds ethical principles on AI in the military and requires appropriate levels of human judgment over the use of force with regard to autonomous or semi-autonomous systems; as already explained, that concept has ethical grounds. Another interesting link can be identified in the work of the UN Group of Governmental Experts on emerging technologies in the area of LAWS, whose most recent report adopted by consensus (and, therefore, even by those that do not promote legally binding rules) states that their work “continues to be guided by […] relevant ethical perspectives”.
Considering the dual use of AI technology, some States also stress that mankind is not ready to negotiate a legally binding instrument. For instance, Poland sees legally binding instruments as potentially hampering progress and the beneficial development of the technology for commercial use. Following this rationale, one may argue that imposing such legal boundaries would itself be ‘unethical’, for example because it would limit the benefits available to future generations.
Q4: Do you believe there is a way for us to use fully autonomous weapons in a way that makes warfare more humane and precise and does not lead to a loss of humanity on the battlefield?
The question revolves around ‘fully’ autonomous weapons. If one takes the view that full autonomy is a feature of artificial general intelligence (AGI, or human-based intelligence – HBI), scholars generally argue that systems are still far from being capable of replicating natural learning methods. I would therefore confine the discussion to systems that display ‘some’ autonomy. Without opening the Pandora’s box of definitions and terminology, I will refer to autonomy in LAWS’ critical functions, that is, the selection of targets and the use of force against them without human intervention (which does not, however, exclude other forms of control, as I explained in Question 2).
In view of this preliminary note, humanity may be undermined by the use of LAWS on the basis of the following two considerations.
First, the rationale for the principle of humanity is to counterbalance the military necessity sought by parties to a conflict in weakening the military potential of the enemy armed forces. This balance finds expression in the rules governing means and methods of warfare, and particularly in the principles of distinction, proportionality and precautions, as well as in the rules on their effects on combatants. Humanity therefore permeates the entirety of the rules on the conduct of hostilities. If one takes this view, a violation of any of the rules of targeting would imply an imbalance between military necessity and humanity. The answer to your question would ultimately depend on whether LAWS can be utilized in compliance with IHL.
Second, humanity also exists as a principle in its own right, rooted in ethical and moral considerations. Many argue that decisions over the life and death of humans should rest with humans, for example because humans generally act in view of the consequences (legal or moral) of their acts. Others, spearheaded by the NGO Article 36, take the view that encoding ‘target profiles’ leads to dehumanization, because humans would be perceived merely as patterns of sensor data. This is one of the reasons why several States and organizations suggest prohibiting LAWS that directly target humans. This argument stands apart from the operational challenges pertaining to the mutability of legitimate human targets, as in the case of direct participation in hostilities (see Question 1). Thus, at least insofar as the direct targeting of humans is concerned, LAWS may undermine humanity if humans ultimately do not hold the power of decision over life and death.
Despite these two points, some would approach the problem differently by comparing LAWS with human combatants. More specifically, a human’s survival instinct may lead to excessive use of force. Very interestingly, Arkin once argued that systems lacking such a survival instinct would act more “humanely” than humans. There is a further aspect of humanity worth drawing out for the purpose of this exercise. If one speculates about the future of warfare, technology may lead to battlefields populated by LAWS alone. This would entail a loss of humanity in warfare lato sensu (or, rather, a “depersonalization” of the use of force, as is already the case with drones), which is not necessarily a downside when observable on both sides. As military advantage would become confined to the destruction of hostile materiel, the risk of human loss would depend solely on the location of the encounters (e.g., urban warfare) or the effects thereof. However, it is hard to imagine perfectly symmetric conflicts in which both parties refrain from committing human combatants and possess and employ the same arsenal of weapon systems, resources, and expertise in the operationalization of military technologies.
On a final note, I would tend not to reflect in this rundown on systems’ ‘precision’, and thus on their potential to aid the application of the rules of targeting, simply because the answer would depend on too many variables, such as the type of environment where the weapon system, ammunition or warhead is deployed; the mutability of the target; the quality of the algorithm’s training, validation and testing; its capability to produce valid output in response to unknown input, recognize anomalies and suspend activities without human intervention; and the degree of vulnerability to hacking, jamming or spoofing, to name just a few examples.