As the second session of the UN's Group of Governmental Experts on Lethal Autonomous Weapons Systems convened last month to discuss measures relating to the normative and operational framework for emerging technologies, we interviewed Ousman Noor from the Campaign to Stop Killer Robots to ask him a few questions about why these weapons should be prohibited and why it's not too late to do so. Ousman is the Government Relations Manager at Stop Killer Robots, a global coalition of 200+ civil society organisations across 70+ countries working towards a new international treaty on autonomous weapons systems. Ousman is based in Geneva and engages with the diplomatic community at the United Nations. He is also a human rights barrister who practised law in London for 10 years, during which time he served as a senior teaching fellow at the University of London. He also has a Master's degree in social anthropology from the University of Oxford.
Q: To start off with, could you explain, for those who don't know, what LAWS are and why some states are arguing they should be banned?
LAWS stands for Lethal Autonomous Weapons Systems. International discussions on LAWS began at the United Nations in Geneva in 2013, with regular meetings taking place at the Convention on Certain Conventional Weapons (CCW). However, since then, most States have argued that the 'L' in LAWS should be dropped, and that we should instead be discussing Autonomous Weapons Systems (i.e. AWS), because systems can be non-lethal but still be illegal, e.g. if they are indiscriminate or designed to cause superfluous injury. The Stop Killer Robots campaign agrees with this view, and we now refer to AWS instead of LAWS.
We consider AWS to be systems that use sensors and the processing of sensor data to select and engage targets with force, without human intervention. This means that the human user of the system does not determine where, when, or against what force is applied. Instead, the execution of force takes place autonomously: the system captures data from the external environment, processes that data, and matches it against a predetermined target profile. Essentially, the role of the human operator in executing force is replaced by machines.
AWS pose a range of serious legal, ethical, humanitarian, security, and environmental risks. By replacing humans with machines in the application of force, the context-specific control, judgement and supervision required to make such decisions is lost to the autonomous process. The United Nations Secretary-General, the International Committee of the Red Cross, thousands of experts in artificial intelligence and technology, faith leaders and civil society organisations around the world have been calling for a new international legally binding instrument on AWS to safeguard against these risks.
Over 10 years of discussions, the number of States calling for an international legally binding instrument has grown to 91. The very first country to make the call was Pakistan, and the number rises every year. We expect that negotiations will be launched in the period ahead.
Q: What do you make of the counterargument that LAWS can actually comply better with the laws of armed conflict, in that, to quote Christof Heyns, they "will not be susceptible to some of the human shortcomings that may undermine the protection of life. Typically they would not act out of revenge, panic, anger, spite, prejudice or fear. Moreover, unless specifically programmed to do so, robots would not cause intentional suffering on civilian populations, for example through torture. Robots also do not rape" — basically, this notion that the inhumanity of war can only be brought about by, well, humans?
It is correct that it is humans who are responsible for the atrocities associated with war and conflict. Throughout history, humans have manipulated the external environment to create tools of war and oppression and used them to dominate other humans. The military motivation for States to develop and use AWS serves that same purpose: to minimise the cost of going to war, maximise the capacity to inflict damage on others, and gain military superiority. AWS would enable States to inflict force using machines without any human control. Of course, machines are not able to make 'decisions' themselves, as they are not human. So it would simply be mechanical slaughter, whereby machines do the killing, but it is humans who have engineered and programmed the machine to do the damage, at a time and place far removed from when and where the actual killing takes place.
The use of AWS does not eliminate the problem of human shortcomings; it just allows those shortcomings to manifest in exponentially greater ways. For example, it would be possible for an AWS to be engineered and programmed to kill all people of a certain colour, ethnicity, gender or age at a particular location. Nor does the use of AWS reduce the prospect of humans raping other humans; it just enables humans to exert greater dominance over other humans by using an AWS.
Here, it is also important to state that Stop Killer Robots is not anti-technology or anti-AI. On the contrary, we use technology and AI to make our lives better. We believe that technology should be developed in ways that promote shared human values and dignity. There are even many advantages to technology and AI in the military, for example improving accuracy, tracking and detection, or conducting reconnaissance. However, allowing machines to kill a person based on that person's biodata, without human control, would be unlawful and degrading to human dignity, which is why we are against AWS that target humans directly.
Q: Some argue against the prohibition of LAWS on the basis that they're already here. We had the first documented use of a slaughterbot in March 2021 and the first use of a drone swarm in combat shortly after, in June of that year. So if the technology is already here and states are using it, shouldn't we focus on its regulation rather than prohibition?
AWS have been around for many decades. There are weapons systems that autonomously select and engage targets without human intervention that have been used by militaries in lawful ways for a long time. For example, the Phalanx Close-In Weapon System, which is used on ships, can select and engage incoming missiles within seconds without human intervention. We don't have a problem with these.
Stop Killer Robots is not arguing that all AWS should be prohibited, only those that cannot be used with meaningful human control, or that target humans directly. The Phalanx is used to target incoming missiles, generally in open water (so away from civilian populations), for a fixed duration, with limits on its range, and under the supervision of a human who is trained in how to use the system. The system is therefore being used with meaningful human control and is not targeting humans directly, and would therefore be permissible.
If an AWS was used that did not have limits on what it would target, how long it would operate for, or what scale of force would be applied, and it was deployed among a civilian population, then it is probably not being used with meaningful human control. That is the type of system that we argue should be prohibited. There are a range of factors that should be considered in determining whether meaningful human control is being exercised, including how predictable, understandable, explainable, reliable, and traceable the system is. We also need to consider what types of limits and obligations should apply to the system, such as ensuring that it can be deactivated to prevent unlawful attacks, and that the operator is properly trained in how the system functions, so that they understand what circumstances would trigger an application of force.
For these reasons, we need a combination of both prohibitions and regulations to ensure meaningful human control over the use of force. Without prohibitions, there would be no red line and no way to distinguish between unacceptable and acceptable systems, meaning that all AWS would potentially be permissible, even those that cannot be used with meaningful human control.
Q: The debate around LAWS is sometimes argued in crude Global North vs. Global South terms, which ignores the fact that the first documented use of LAWS was by a Turkish-made drone against retreating combatants in Libya. Is this dichotomy incorrect here? And what does this tell us about the argument that technology can be 'the great leveller', in that less powerful states may be able to build their expertise in this area and use these systems in combat?
AWS will proliferate and be available to be used by everyone, including non-state actors and individuals, unless we urgently establish an international legally binding instrument. In this sense, yes AWS could eventually become a ‘great leveller’ if everyone has access to them. This would precipitate the greatest security challenge the world has ever faced, as many experts in AI and technology have warned. In theory, an individual could use an AWS to conduct a massacre in a school, targeting children of a particular skin colour. A non-state actor could decimate an entire civilian population in a city, targeting only women. An assassin could programme an AWS to wait outside your place of work, and execute you based on your biodata. An invading State could execute every human within a particular location. Once such systems have been permitted for development and use, and mass production starts, it would be very difficult to then eliminate them. That is why our work is so urgent.
At the moment, it is highly militarised States that are investing in these technologies, and they may gain a short-term military advantage as a result. However, the entire world will suffer if such systems are not properly prohibited and regulated through an international legally binding instrument. In the long term, no State gains a military advantage over another, because the technology and software will be available everywhere. On the contrary, there are only internal and external security threats associated with their proliferation.
This is why Pakistan, and at least 90 other States, have argued in favour of an urgent international legally binding instrument to prevent the catastrophic security consequences of allowing these systems to proliferate, absent clear prohibitions and regulations.
Q: There is a large degree of cynicism about the workings of the GGE, particularly since these discussions have been going on for 10 years now and still no consensus has been reached. In that time, technology has advanced, and given how quickly we've accepted AI in the form of ChatGPT into our everyday lives and consciousness (which is quite alarming, really), I do wonder whether we will be able to spare hostilities from the AI revolution. What do you make of this?
It is correct that progress at the GGE (the Group of Governmental Experts meetings at the CCW forum) has been difficult; this is because it is a forum that requires every State to agree on an outcome in order to make progress. So far, despite the vast majority of States calling for negotiations on a legally binding instrument, a handful of highly militarised States are blocking progress. Nonetheless, the CCW forum has enabled detailed policy discussions to take place, and there is now significant policy convergence on the identification of risks, the characterisation of these systems, and the types of prohibitions and regulations that are required.
In 2023, States are considering how to capture this policy convergence in a forum that can progress the global normative and operational framework on AWS and build momentum towards an international legally binding instrument. This year, we expect States to work together towards a resolution at the United Nations General Assembly. This would be a significant step forward. We also expect the United Nations Secretary-General to continue to actively encourage States towards launching negotiations. We aspire to launch negotiations on an urgent basis, as each year we delay, the risks increase.
While there have been frustrations along the way, Stop Killer Robots is in very good form and optimistic about the impending launch of negotiations. Already in 2023, several international conferences have taken place, including in the Netherlands, Costa Rica and Luxembourg, with additional conferences planned in the period ahead. These conferences have injected huge momentum into the cause, and we believe it is inevitable that negotiations will be launched soon. States, including Pakistan, should consider how best to lead and influence such negotiations to ensure that the legal, ethical, humanitarian and security risks posed by AWS are addressed through establishing clear international rules and regulations on AWS.
At a time of heightened global insecurity, we now have a collective opportunity to establish new benchmarks in the use of force, protect human dignity and create a more peaceful world now and for future generations through establishing an international legally binding instrument on AWS.