The Ethics of Using Killer Robots in Armed Conflicts
International law is best understood as a system of positive law embedded within culture. It is not removed from humans or human society, nor does it exist in a vacuum separate from the social contexts within which it operates.[1] As a result, international law should act on or intervene in pressing issues that pose moral or ethical concerns.[2] One such issue is the controversy surrounding lethal autonomous weapons systems (LAWS), colloquially referred to as killer robots, and the ethical implications should their use become commonplace in armed conflicts. This article argues that the fallibility and shortcomings of existing LAWS render their widespread use in conflicts unethical, and explores the current discourse around such weapon systems.
Understanding Existing LAWS
As yet, there is no internationally agreed formal definition of lethal autonomous weapon systems. For the purpose of this text, following the International Committee of the Red Cross’s (ICRC) working definition,[3] an autonomous weapon system will be defined as:
Any weapon system with autonomy in its critical functions—that is, a weapon system that can select (search for, detect, identify, track or select) and attack (use force against, neutralize, damage or destroy) targets without human intervention.[4]
Currently, LAWS are not at a stage of development that would render their use as refined, reliable or sophisticated as that of existing weaponry. Autonomous robotic systems have several considerable limitations: they function unpredictably, they cannot make complex decisions, and they lack the capacity to perceive changes in their environment. They also rely on human input for many functions in order to correct mistakes, and a lack of standardization in testing and validation makes them “incapable of operating outside simple environments.”[5] Nevertheless, they are undergoing rapid technological advancement in a bid to make them more accessible and widely employed in the military realm. It is for this reason that they are said to be the third revolution in warfare, after gunpowder and nuclear weapons. For the military, three main areas drive interest in autonomous, unmanned weaponry: the potential to reduce operating costs and personnel requirements, increased safety compared with manned operations, and increased military capability from using a single system to perform multiple functions, from identifying targets to attacking them.[6]
Ethical Concerns
Retaining Human Agency and Dignity in Armed Conflicts
The widespread emergence of LAWS poses serious ethical considerations, namely whether the removal of human beings from the equation leads to the removal of ethical conduct. What happens when life-and-death decisions are delegated to non-human entities?[7] Could these entities ever surpass human capabilities of discernment, or abide by rules of international humanitarian law such as distinguishing between military targets and civilians?[8] The matter at issue is not that autonomous weaponry is inevitable, as many of its proponents contend; rather, the decision whether to deploy such systems at all, and the parameters of their deployment, including the degree of autonomy and how that autonomy is managed, remains an open question.[9]
At the heart of the ethical debate is the conviction that human agency and intent in decisions to use lethal force must not be compromised.[10] This is what causes much of the anxiety regarding the loss of human oversight over such conduct.[11] The question has been raised repeatedly in different quarters, including by states themselves, UN special rapporteurs, non-governmental organisations, the ICRC, and the scientific and technical communities.[12] It also transcends the bounds of international humanitarian and human rights law and delves into the realm of morality. UN Special Rapporteur Christof Heyns has stated that “allowing LARs [lethal autonomous robots] to kill people may denigrate the value of life itself”, and Human Rights Watch notes that such systems would “cross a moral threshold.”[13]
Closely linked to the issue of human agency over decisions to employ lethal force is the challenge of preserving human dignity.[14] The central argument here is that it matters not only whether a person is killed or injured, but also “how they are killed or injured, including the process by which these decisions are made.”[15] The way force is used must not undermine the human dignity of the targets.[16] In the digital scope of an autonomous robotic system, human individuals are reduced to mere inanimate targets to be eliminated, and this affects not only combatants in armed conflicts but also civilians who, although they must not be targeted, are inevitably exposed to collateral risks.[17]
Accountability Gaps in IHL Violations
Supposing further technological evolution renders lethal autonomous weapon systems reliable and predictable enough for wide-scale use, there remain pressing concerns about a possible legal “accountability gap” in cases of violations of international humanitarian law (IHL).[18] Current laws of state responsibility dictate that states can be held liable not only for IHL violations that occur through the use of LAWS, but also for deploying LAWS that have not been adequately tested.[19] The difficulty arises from the lack of human control over these weapon systems: it becomes hard to ascertain whether the humans who program or deploy them would have the knowledge or intent needed to be found criminally liable when a machine attacks targets independently and erroneously.[20] Moreover, product liability laws allow manufacturers and programmers to be held accountable for their role in malfunctioning autonomous systems as well.[21] Having to look to these various actors to ascertain liability for malfunctioning LAWS creates a complex and taxing legal framework that lacks clarity.
Vulnerability of LAWS to Cyber-attacks
Another factor further complicating the process of assigning liability is the possibility of hacking. Ultimately, any software is vulnerable to attack or manipulation.[22] This view was reiterated in a 2014 review of US weapons systems, in which Michael Gilmore, then Director of Operational Test and Evaluation (DOT&E), stated that nearly all of the systems tested were vulnerable to cyberattack, and that, because these systems are so complex, new vulnerabilities are still being discovered.[23] At the 2017 Group of Governmental Experts on Lethal Autonomous Weapons Systems, the question was posed whether “autonomous machines [can] be made foolproof against hacking”; industry experts did not believe so.[24] While existing systems can be scrutinized for coding flaws, completely eliminating their vulnerability is improbable. In such circumstances, are the state or the actors behind activation and deployment still liable for using autonomous systems so inherently vulnerable to cyberattacks, with potentially grave unintended consequences? Moreover, are there any ethical grounds on which the use of such unreliable and vulnerable systems in armed conflicts is even justifiable?
International Opposition to LAWS
Due to these issues, a significant oppositional force has developed against the use of LAWS. Human Rights Watch, along with several other NGOs, launched the Campaign to Stop Killer Robots in 2013, describing fully autonomous weapons as a “grave threat to humanity that deserves urgent multilateral action.”[25] Its reporting shows that a total of 97 countries have expressed concerns on the question of LAWS; a vast majority of them regard a degree of human control over these systems as critical to their acceptability, and most have expressed a desire for a new treaty mandating the retention of human control.[26] Of these 97 countries, 30, including Pakistan, advocate a total ban on such weaponry.[27] This opposition led to eight Convention on Conventional Weapons (CCW) meetings between 2014 and 2019 to discuss and propose protocols governing killer robots.[28] Countries such as Austria, Chile and Brazil have proposed negotiating a legally binding international instrument to ensure that meaningful human control over LAWS is retained. However, these proposals are routinely dismissed by a small number of military powers (notably Russia and the United States), halting progress, as decisions at the CCW are consensus-based.[29]
These conversations around LAWS are still underway, and discussions cover the entire spectrum of opinion: advocacy for the use of LAWS, calls for a middle ground of meaningful regulation, and demands for an outright ban on further development of the technology. With such diverse standpoints, it is essential for international forums to go beyond politicised debates, assess the merits of these arguments, and invoke a collective ethical norm when deciding what should be done about the further development and deployment of LAWS.
Conclusion
UN Secretary-General António Guterres has called these autonomous machines “morally repugnant and politically unacceptable” and hence worthy of prohibition under international law.[30] Ideally, the grave ethical concerns posed by the development and deployment of killer robots should prompt the international community to take a collective stance outlawing such technology in warfare. In practice, however, as seen in the views of states with a vested interest in advancing this technology, a total ban appears unlikely. The more probable outcome is a compromise that balances continued advancement of LAWS against mitigation of the ethical concerns they raise, by ensuring a degree of human control over these weapons rather than allowing them to operate entirely independently.
[1] Alexander Boldizar and Outi Korhonen, ‘Ethics, Morals and International Law’ (1999) EJIL, Page 280 http://www.ejil.org/pdfs/10/2/582.pdf Accessed 27 November 2020.
[2] Ibid. at 281
[3] ICRC, Views of the ICRC on autonomous weapon systems, paper submitted to the Convention on Certain Conventional Weapons Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), 11 April 2016, https://www.icrc.org/en/document/views-icrc-autonomous-weapon-system
[4] Ibid.
[5] ICRC Expert Meeting, Autonomous Weapon Systems: Technical, military, legal and humanitarian aspects. Geneva, Switzerland, 26-28 March 2014. Pages 7, 13, https://www.icrc.org/en/publication/4283-autonomous-weapons-systems
[6] Ibid. at 13
[7] Karolina Zawieska, “An ethical perspective on autonomous weapon systems: Perspectives on Lethal Autonomous Weapon Systems” (November 2017). Page 49. https://www.researchgate.net/publication/323359493_An_ethical_perspective_on_autonomous_weapon_systems_Perspectives_on_Lethal_Autonomous_Weapon_Systems Accessed 6 November 2020.
[8] Ibid. at 49
[9] Ibid. at 50, 51
[10] ICRC, Ethics and autonomous weapon systems: An ethical basis for human control? Geneva, 3 April 2018, https://www.icrc.org/en/document/ethics-and-autonomous-weapon-systems-ethical-basis-human-control Page 1.
[11] Ibid. at 7
[12] Ibid.
[13] Ibid. at 8
[14] Ibid. at 10
[15] Ibid. at 2
[16] Ibid.
[17] Ibid.
[18] Neil Davison, “A legal perspective: Autonomous weapon systems under international humanitarian law”, UNODA Occasional Papers, No. 30, November 2017, Page 16 https://read.un-ilibrary.org/disarmament/unoda-occasional-papers-no-30-november-2017_29a571ba-en#page1
[19] Ibid.
[20] Ibid. at 17
[21] Ibid.
[22] UNIDIR, “The Weaponization of Increasingly Autonomous Technologies: Autonomous Weapon Systems and Cyber Operations”, 2017, No. 7, Page 10 https://unidir.org/files/publications/pdfs/autonomous-weapon-systems-and-cyber-operations-en-690.pdf
[23] Ibid. at 11
[24] Ibid. at 11
[25] Brian Stauffer, “Stopping Killer Robots” (Human Rights Watch, 10 August 2020), https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and Accessed 6 November 2020.
[26] Brian Stauffer, “Killer Robots: Growing Support for a Ban” (Human Rights Watch, 2020), https://www.hrw.org/news/2020/08/10/killer-robots-growing-support-ban Accessed 6 November 2020.
[27] Ibid.
[28] Ibid.
[29] Ibid.
[30] Lisa Schlein, ‘Chances of UN Banning Killer Robots Looking Increasingly Remote’ (VOA News, 25 March 2019) https://www.voanews.com/europe/chances-un-banning-killer-robots-looking-increasingly-remote Accessed 27 November 2020.
This article addresses one of the most pressing global challenges in the ethics and governance of robotic/autonomous weapons, in particular lethal autonomous weapons systems (LAWS). Safa Imran has elegantly summarised and discussed critical issues in the current debate on the deployment of LAWS.
Robotic weapons, which are unmanned, are often divided into three categories based on the amount of human involvement in their actions [1]: (1) Human-in-the-Loop Weapons: robots that can select targets and deliver force only with a human command; the robot is remote controlled by the operator. (2) Human-on-the-Loop Weapons: robots that can select targets and deliver force under the oversight of a human operator who can override the robots’ actions. (3) Human-out-of-the-Loop Weapons: robots that are capable of selecting targets and delivering force without any human input or interaction; the operator has pre-programmed the robot, and the robot can operate self-sufficiently.
It is the third category, Human-out-of-the-Loop Weapons, that is classified as “fully autonomous weapons,” also known as lethal autonomous weapons systems or “killer robots,” as they would be able to select and engage targets without human intervention.
The article states that “… product liability laws allow for the accountability of manufacturers and programmers for their role in malfunctioning autonomous systems as well.” However, in the United States, civil liability would be virtually impossible due to the immunity granted by law to the military and its contractors and the evidentiary obstacles to product liability suits [2].
Autonomous weapon systems, and in particular LAWS, are an emerging class of advanced weapons that use technologies associated with robotics and artificial intelligence to track, identify, engage, and attack military targets without human intervention. The main concern is designing intelligent autonomous systems that uphold the ethical values of society and, in the case of killer robots, make the sophisticated practical and moral distinctions required by the laws of armed conflict. AI implementations will have learning algorithms deciding, on the basis of some criteria, whether a targeted human lives or dies; how, then, do we set the limits on these algorithms, and how do we review the transparency and accountability behind them? AI applications raise complex legal and ethical issues in the design, development and deployment of autonomous systems and are the subject of ongoing debate [3]. Researchers have identified a series of ethical principles for the responsible development of AI, including respect for human autonomy, human rights, well-being, democratic participation, transparency, and accountability [4], [5].
Given the strained strategic relationships between the United States and Russia, and between the United States and China, these countries are attempting to leverage AI and develop autonomous systems against this backdrop. Military AI arms races are thus inevitable, or even already underway, and the global proliferation of fully autonomous “killer robots” is just a matter of time. From an ethical perspective, LAWS should be totally prohibited at both international and national levels in view of their high-level autonomy and highly unpredictable consequences. Yet since autonomous weapons are here to stay, what we need is a clear ethical and legal framework in which the legitimate exercise of deadly force always requires meaningful human control. The burning question is: how can an international, legally binding regulation on the development and deployment of LAWS be brought about?
Citations:
[1] Human Rights Watch (2012). Losing Humanity: The Case against Killer Robots.
[2] Human Rights Watch. (2015). Mind the Gap: The Lack of Accountability for Killer Robots.
[3] The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
[4] High-Level Expert Group on AI. (2019). Ethics guidelines for trustworthy AI. Brussels: European Commission.
[5] IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (2019). Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. First edition.