Countries are already utilizing artificial intelligence-based weapons in armed conflicts, and this use has changed warfare from being a solely human endeavor. For instance, the United States has operated robots named SWORDS (Special Weapons Observation Reconnaissance Direct Action System) in Afghanistan to detect and disable improvised explosive devices. These robots, however, have limited autonomy and require human control and direction. The Republic of Korea likewise deploys the SGR-1 sentry robot on its border with the Democratic People’s Republic of Korea, but it too has limited autonomy. The Terminal High Altitude Area Defense System (THAAD) was built to autonomously identify, engage, and destroy short- and medium-range ballistic missiles using infrared satellites, communication satellites, and a group of interceptor batteries. Similarly, the Perdix program, an ongoing US project begun in 2013, aims to field armed but unmanned drone swarms that can reconfigure themselves if any drone, or group of drones, drops out.[1]
This article will discuss the arguments made for and against artificial intelligence-based autonomous weapons – usually referred to as Lethal Autonomous Weapon Systems (LAWS), Autonomous Weapon Systems (AWS), or Lethal Autonomous Robots (LAR)[2] – the challenges these weapons pose to international law, and recent endeavors to regulate the militarization of artificial intelligence.
TYPES OF LETHAL AUTONOMOUS WEAPONS SYSTEMS:
Autonomous weapons systems are categorized into three main types:
- Fully Autonomous Weapons Systems, i.e. those which involve no human in target detection, engagement, and so on.
- Semi-Autonomous Weapons Systems, i.e. those that are partially under human control: they can detect and identify targets, but the ultimate decisions, such as whether or not to attack a target, are made by human controllers.
- Supervised Autonomous Weapons Systems, i.e. those which operate independently but whose human controllers can intervene at any stage of target selection, engagement, and the launching of an attack.
TWO DOMINANT SCHOOLS OF THOUGHT:
Advocates of the militarization of artificial intelligence are of the view that autonomous weapons or robots can function better, more effectively, and with greater legal compliance than humans. They would not be driven by the “idea of self-preservation” and would also outperform humans in precision. In addition, these robots are devoid of human emotions such as paranoia, hysteria, fear, aggression, bias, and frustration, which usually impact human judgment deleteriously and can lead to undesired and unjustified acts. Humans are highly vulnerable to “cognitive fallacies” in pressing situations, especially where there is insufficient time to make a rational decision. One example is “cherry-picking”, where only information that fits preexisting notions and beliefs is credited, paving the way to flawed decision-making and, consequently, heavy collateral damage.[3] AI-based robots are immune to such premature cognitive closure. Moreover, these machines would not act arbitrarily or on the basis of a single source of information: their “unprecedented integration” would allow them to collect data from multiple sources, analyze the situation panoramically, and only then resort to lethal force, a process impossible for humans, particularly at such speed. It is argued that considerable research can be done, and consensus built, on a single ethical framework for these machines, so as to make their conduct ethically sensitive during missions.[4] That, however, may prove difficult given cultural differences in approaches to ethics and the multiplicity of moral philosophies.
The second school of thought deems the absolute delegation of lethal decision-making to machines problematic and argues that machines should never be allowed to violate the sanctity of human life. Loss of control over the machines would be destabilizing and undesirable, raising the chances of unintended escalation, failure to terminate an attack in time, and the like. One-of-a-kind situations may also arise at any moment, requiring human judgment to alter plans and adapt, an ability which pre-programmed machines would lack. Soldiers not only take orders but are also capable of understanding the intention of their commanders. They understand that the ultimate goal is the realization of intended results under changing contexts and circumstances, not blind obedience to orders.[5] Robots would be unable to deviate from instructions to execute those intentions or goals. This inflexibility may add fuel to escalating situations, leading to horrendous outcomes. The world has seen in recent times the deadly consequences that technology’s vulnerability to malfunction or hacking can pose. The communication barrier between a human controller and machines, in the case of fully autonomous weapon systems, may make it difficult to halt an operation in time.
Considerable research has been done, and the advocates of these two paradigms have put forward many arguments, but the world has yet to reach unanimity on either of them.
CHALLENGES TO INTERNATIONAL LAW:
All newly developed weapons must comply with Article 36 of Additional Protocol I, which obliges every state to review them to ensure that they do not violate international law. The key challenges autonomous weapons pose to international law are those of accountability and attribution.[6]
Under the principle of distinction, combatants and civilians must be distinguished, and only the former may be targeted.[7] It is debatable whether machines that lack human judgment and reasoning will be able to operate under this principle. The principle of proportionality dictates that the loss of civilian life or injury to civilians must not be excessive in relation to the military advantage anticipated.[8] Whether autonomous weapons can comply with this requirement calls for comprehensive discourse among international lawyers, scientists, policymakers, and analysts.
Furthermore, serious reservations exist that rogue states, militant groups, and terrorist organizations may use this technology anonymously, putting lives at stake. No international entity exists to specifically regulate the development and proliferation of autonomous weapons.
Apart from this, the major challenge is that of attribution and accountability, since it is as yet unclear who is to be held accountable when, for instance, a civilian is wrongly targeted by an autonomous weapon system. Attribution establishes that an internationally wrongful act emanates from a state, in order to establish that state’s responsibility. No state should be allowed to evade responsibility or accountability on the pretext of the full autonomy of its weapons. The worry is that states will deny that the actions of autonomous weapons are attributable to them and entail their responsibility. Human control must include the ability to terminate an operation before things go wrong, so that states breaching international law can be held responsible.[9]
Many states, including Pakistan, have called for a moratorium on the production of Lethal Autonomous Weapon Systems (LAWS) due to these challenges. Pakistan has argued before the General Assembly’s First Committee, which deals with disarmament and international security matters, that any weapon which delegates life-and-death decisions to machines is by its very nature unethical and cannot comply with international law, including International Humanitarian Law and Human Rights Law.[10]
ABSENCE OF INTERNATIONAL LAW ON ARTIFICIAL INTELLIGENCE:
The development of autonomous weapons is a recent phenomenon, and it remains to be seen what laws will develop, or how current law will evolve, to regulate these weapons. Extensive debates are being conducted, and laudable endeavors have been made by the Group of Governmental Experts comprising the contracting parties to the Convention on Certain Conventional Weapons.[11] It is hoped that the GGE will act as a launching pad for the creation of international law and an international body to regulate lethal autonomous weapon systems. States are the conventional subjects of and actors in international law, yet the pioneers of this technology, particularly the world’s superpowers, remain reluctant to regulate it; the United Nations, operating under their veto powers, is similarly constrained.
Artificially intelligent entities must be brought under a single international regulatory body and legal framework. This framework must address the level of autonomy permitted to these entities and who would be held accountable in case of any contravention. Any entity that can navigate, detect, and engage a target beyond human control should be banned. The same understanding was developed at the Group of Governmental Experts’ meetings initiated in 2013.[12] This would assuage fears that humans may be at the mercy of machines during fully autonomous and automated warfare. The decision to kill a human must not be at the discretion of a machine, and that decision-making power cannot be absolutely delegated. Instructions must be specific, meaningful human control is indispensable, and discretionary or arbitrary powers should not be delegated to a machine if accountability and responsibility are to be ensured.
CONCLUSION:
The art of warfare has evolved, and the emergence of artificial intelligence and lethal autonomous weapon systems has added a new dimension to the ways we fight conflicts today. Militaries around the world are scrambling to acquire ever more sophisticated weapons operated by artificial intelligence. The danger in this is the lack of categorical rules of accountability. A code of conduct should be unanimously adopted to govern the research and development of LAWS, and an international body with a strong mandate should be created to review and verify commitment to that code. It is important to develop a robust legal framework to actively shape the direction of the militarization of artificial intelligence before it is beyond the control and capacity of the international community.
——————————
[1] Ajey Lele, ‘A military perspective on lethal autonomous weapon system’ (30 Nov 2017) www.un.org/disarmament
[2] Karolina Zaweiska, ‘An ethical perspective on autonomous weapon systems’ (30 Nov 2017) www.un.org/disarmament
[3] Ronald C Arkin, ‘A roboticist’s perspective on lethal autonomous weapons systems’ (30 Nov 2017) www.un.org/disarmament
[4] Ibid
[5] Ibid
[6] Neil Davison, ‘A legal perspective: Autonomous weapon systems under international humanitarian law’ (30 Nov 2017) www.un.org/disarmament
[7] Article 48 of Additional Protocol I (regarded as customary international humanitarian law)
[8] Article 51 (5)(b) Additional Protocol I (also customary international law)
[9] Neil Davison, ‘A legal perspective: Autonomous weapon systems under international humanitarian law’ (30 Nov 2017) www.un.org/disarmament
[10] The Nation, ‘Pakistan calls for moratorium on production of LAWS’ (01 Nov 2018)
[11] Amandeep S. Gill, ‘Lethal Autonomous Weapons System’ (30 Nov 2017) www.un.org/disarmament
[12] Izumi Nakamitsu (2017) https://www.unog.ch/80256EDD006B8954/(httpAssets)/6866E44ADB996042C12581D400630B9A/$file/op30.pdf
USMAN AHMAD
The writer is a civil servant and a student of law, currently working in the Federal Government of Pakistan.
Twitter: @Osman77211076