The Ethics of Using Killer Robots in Armed Conflicts
International law is best understood as a system of positive law embedded within culture. It is not removed from humans or human society, nor does it exist in a vacuum separate from the social contexts within which it operates. As a result, international law should act on, and intervene in, pressing issues that pose moral or ethical concerns. One such issue is the controversy surrounding lethal autonomous weapons systems (LAWS), colloquially referred to as killer robots, and the ethical implications should their use become commonplace in armed conflicts. This article argues that the fallibility and shortcomings of existing LAWS indicate that their widespread use in conflicts is unethical, and explores the current discourse around such weapon systems.
Understanding Existing LAWS
As yet, there is no internationally agreed formal definition of lethal autonomous weapon systems. For the purpose of this text, following the International Committee of the Red Cross’s (ICRC) working definition, an autonomous weapon system will be defined as:
Any weapon system with autonomy in its critical functions—that is, a weapon system that can select (search for, detect, identify, track or select) and attack (use force against, neutralize, damage or destroy) targets without human intervention.
Currently, these systems are not at a stage of development that would render their use as refined, reliable or sophisticated as existing weaponry. Autonomous robotic systems have several considerable limitations: they are unpredictable in their functioning, incapable of making complex decisions, and lack the capacity to perceive changes in their environment. They also rely on human input for many functions in order to correct mistakes, and a lack of standardization in testing and validation makes them “incapable of operating outside simple environments.” Nevertheless, they are undergoing rapid technological advancement in a bid to make them more accessible and widely employed in the military realm; it is for this reason that they have been called the third revolution in warfare, after gunpowder and nuclear weapons. For the military, three main areas drive interest in autonomous, unmanned weaponry: the potential to reduce operating costs and personnel requirements; increased safety compared to manned operations; and increased military capability through a single system performing multiple functions, from identifying targets to attacking them.
Retaining Human Agency and Dignity in Armed Conflicts
The widespread emergence of LAWS poses serious ethical questions, namely whether removing human beings from the equation also removes ethical conduct. What happens when life-and-death decisions are delegated to non-human entities? Could these entities ever surpass human capabilities of discernment, or abide by international humanitarian rules such as distinguishing between military targets and civilians? The matter at issue is not that autonomous weaponry is inevitable, as many of its proponents contend; rather, the decision whether to deploy such systems at all, and the parameters of their deployment, including the degree of autonomy and how that autonomy is managed, remains an open question.
At the heart of the ethical debate is the concern that human agency and intent in decisions to use lethal force must be retained and not compromised. This is what causes much of the anxiety regarding the loss of human oversight over such conduct. The question has been raised repeatedly in different quarters, including by states themselves, UN special rapporteurs, non-governmental organisations, the ICRC, and the scientific and technical communities. It also transcends the bounds of international humanitarian and human rights law and delves into the realm of morality. UN Special Rapporteur Christof Heyns states that “allowing LARs [lethal autonomous robots] to kill people may denigrate the value of life itself”, and Human Rights Watch notes that such systems would “cross a moral threshold.”
Closely linked to the issue of human agency over decisions to employ lethal force is the challenge of preserving human dignity. The central argument here is that it is not only whether a person is killed or injured that matters, but also “how they are killed or injured, including the process by which these decisions are made.” The way force is used must not undermine the human dignity of its targets. An autonomous robotic system reduces human individuals to inanimate targets to be eliminated within the digital scope of the weapon, and this affects not only combatants in armed conflicts but also civilians who, although they must not be targeted, are inevitably exposed to collateral risks.
Accountability Gaps in IHL Violations
Supposing further technological evolution renders lethal autonomous weapon systems reliable and predictable enough for wide-scale use, there remain pressing concerns about a possible legal “accountability gap” in cases of IHL violations. Current laws of state responsibility dictate that states can be held liable not only for IHL violations that occur through the use of LAWS, but also for deploying LAWS that have not been adequately tested. The difficulty arises from the lack of human control over these weapon systems: it becomes hard to ascertain whether the humans who program or deploy them would have the knowledge or intent needed to be found criminally liable when a machine attacks targets independently and erroneously. Product liability laws additionally allow manufacturers and programmers to be held accountable for their role in malfunctioning autonomous systems. Having to look to these various actors to ascertain liability for malfunctioning LAWS creates a complex and taxing legal framework that lacks clarity.
Vulnerability of LAWS to Cyber-attacks
Another factor further complicating the assignment of liability is the possibility of hacking. Ultimately, any software is vulnerable to attack or manipulation. This view was reiterated in a 2014 review of US weapon systems, when the then-Director, Operational Test and Evaluation (DOT&E), Michael Gilmore, stated that nearly all tested systems were vulnerable to cyberattack and that, because these systems are so complex, new vulnerabilities were still being discovered. At the 2017 Group of Governmental Experts on Lethal Autonomous Weapons Systems, the question was posed whether “autonomous machines [can] be made foolproof against hacking”; industry experts did not believe so. While existing systems can be scrutinized for coding flaws, completely eliminating vulnerability is improbable. In such circumstances, are the state, or the actors behind activation and deployment, still liable for using autonomous systems so inherently vulnerable to cyberattacks with grave unintended consequences? Moreover, are there any ethical grounds on which it is even justifiable to use these unreliable and vulnerable systems in armed conflicts?
International Opposition to LAWS
Due to these issues, significant opposition has developed to the use of LAWS. Human Rights Watch, along with several other NGOs, launched the Campaign to Stop Killer Robots in 2013, describing such weapons as a “grave threat to humanity that deserves urgent multilateral action.” A subsequent Human Rights Watch report shows that a total of 97 countries have expressed concerns on the question of LAWS; a vast majority of these regard a degree of human control over such systems as critical to their acceptability, and most have expressed the desire for a new treaty mandating the retention of human control. Of these 97 countries, 30, including Pakistan, advocate a total ban on such weaponry. This opposition led to eight Convention on Certain Conventional Weapons (CCW) meetings between 2014 and 2019 to discuss and propose protocols governing killer robots. Countries such as Austria, Brazil and Chile have proposed negotiations for a legally binding international instrument to ensure that meaningful human control over LAWS is retained. However, these proposals are routinely dismissed by a small number of military powers (notably Russia and the United States), halting progress, as decisions at the CCW are consensus-based.
These conversations around LAWS are still underway, and discussions cover the entire spectrum of opinion: advocacy for the use of LAWS, calls for a middle ground of meaningful regulation, and demands for an outright ban on further development of this technology. Given such diverse standpoints, it is essential for international forums to move beyond politicised debates, assess the merits of these arguments, and invoke a collective ethical norm when deciding what should be done about the further development and deployment of LAWS.
António Guterres, the UN Secretary-General, has called these autonomous machines “morally repugnant and politically unacceptable” and hence worthy of prohibition under international law. Ideally, the grave ethical concerns posed by the development and deployment of killer robots should prompt the international community to take a collective stance and outlaw such technology in warfare. In practice, however, as the views of states with a vested interest in advancing this technology show, a total ban appears unlikely. The more probable outcome is a compromise that attempts to balance continued advancement of LAWS against minimising their ethical concerns by ensuring a degree of human control over these weapons, rather than allowing them to operate entirely independently.
 Ibid. at 281
 ICRC, Views of the ICRC on autonomous weapon systems, paper submitted to the Convention on Certain Conventional Weapons Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), 11 April 2016, https://www.icrc.org/en/document/views-icrc-autonomous-weapon-system
 ICRC Expert Meeting, Autonomous Weapon Systems: Technical, military, legal and humanitarian aspects. Geneva, Switzerland, 26-28 March 2014. Pages 7, 13, https://www.icrc.org/en/publication/4283-autonomous-weapons-systems
 Ibid. at 13
 Karolina Zawieska, “An ethical perspective on autonomous weapon systems: Perspectives on Lethal Autonomous Weapon Systems” (November 2017). Page 49. <https://www.researchgate.net/publication/323359493_An_ethical_perspective_on_autonomous_weapon_systems_Perspectives_on_Lethal_Autonomous_Weapon_Systems> Accessed 6th November 2020.
 Ibid. at 49
 Ibid. at 50, 51
 ICRC, Ethics and autonomous weapon systems: An ethical basis for human control? Geneva, 3rd April 2018, https://www.icrc.org/en/document/ethics-and-autonomous-weapon-systems-ethical-basis-human-control Page 1.
 Ibid. at 7
 Ibid. at 8
 Ibid. at 10
 Ibid. at 2
 Neil Davison, “A legal perspective: Autonomous weapon systems under international humanitarian law”, UNODA Occasional Papers, No.30, November 2017, Page 16 https://read.un-ilibrary.org/disarmament/unoda-occasional-papers-no-30-november-2017_29a571ba-en#page1
 Ibid. at 17
 UNIDIR, “The Weaponization of Increasingly Autonomous Technologies: Autonomous Weapon Systems and Cyber Operations”, 2017, No. 7, Page 10 https://unidir.org/files/publications/pdfs/autonomous-weapon-systems-and-cyber-operations-en-690.pdf
 Ibid. at 11
 Ibid. at 11
 Brian Stauffer, “Stopping Killer Robots” (Human Rights Watch, 10th August 2020), https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and Accessed 6th November, 2020.
 Brian Stauffer, “Killer Robots: Growing Support for a Ban” (Human Rights Watch, 2020), https://www.hrw.org/news/2020/08/10/killer-robots-growing-support-ban Accessed 6th November, 2020.
 Lisa Schlein, ‘Chances of UN Banning Killer Robots Looking Increasingly Remote’ (VOA News, March 25th 2019) https://www.voanews.com/europe/chances-un-banning-killer-robots-looking-increasingly-remote Accessed 27th November, 2020