Lethal Bots or Soldiers of the Prospect – Will Robots Fight Our Wars?


This might sound like something straight out of a science fiction novel, but we may soon have robot armies fighting our wars. The development of autonomous weapon systems has stirred a hornet’s nest. On one hand are the advocates of human rights, who oppose this development with all their might, considering the devastating effects AI and related technology might have on humanity. On the other is the camp of techno-pragmatists, who argue that such technology would bring down human casualties in war while enhancing efficiency. The issue has become a major bone of contention in the international community, and with India nominated to chair a UN group on the future of autonomous weapon systems, its role in the debate is a crucial one.

Understanding a Killer Robot

At the outset of this debate, it is imperative to lay down lucidly what ‘autonomous weapon systems’ are. In 2012, a US Department of Defense directive defined such systems as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system but can select and engage targets without further human input after activation.”

Fully autonomous weapon systems have not yet been developed; however, some countries do possess their precursors. Existing systems include cruise missiles, torpedoes, submersibles, robots for urban reconnaissance, Uninhabited Aerial Vehicles (UAVs), and Uninhabited Combat Aerial Vehicles (UCAVs). Most of the current weapons that fall within the ambit of “autonomous systems” have limited capabilities. They can act independently of immediate human control, i.e. once fired they can determine their own trajectory and pursue the target, but some human intervention is still required and they are not completely ‘autonomous’. Currently, several countries employ such systems in a defensive role, mainly to intercept incoming attacks; the best example is Israel’s Iron Dome. Such systems do not raise any serious ethical issues. The ones fuelling the debate are the intelligent robots.

Developments in Artificial Intelligence (AI) have facilitated the birth of ‘thinking robots’, machines that can reason and take decisions at their own discretion. Their choice of target, and how they approach it, depends on the programming done beforehand. These robots are programmed in such a way that they have the autonomy to think for themselves and develop their own desires, beliefs, and values; they are even being programmed to make ethical choices. They also learn from their experiences, which implies that over time this learning process may make them unpredictable as well.

This makes targeted attacks without any human intervention possible, which is why such systems have been infamously christened ‘killer robots’, ‘lethal bots’ and ‘soldiers of prospect’. It also raises a string of objections against the deployment of ‘lethal bots’ in human wars.

The Debate over Robots Fighting our Wars


Until now, autonomous systems have targeted missiles and ships, which did not have serious ramifications. But now that we are on the path of developing systems capable of selectively attacking humans, a major legal debate unfolds, one that could upend conventional warfare altogether. The main questions are: who is to be held responsible for war crimes committed by these autonomous systems; which laws must these systems adhere to; and how do they square with international humanitarian concerns?

Article 36 of the First Additional Protocol to the Geneva Conventions deals with the legality of newly developed weapons. The review it calls for has two prongs: one against weapons whose effects cannot be limited, and the other against weapons that cause unnecessary pain or suffering. It remains to be seen whether any ‘killer robot’ would be able to meet the standards set by Article 36.

Two important laws of war also offer insight into the utility and validity of such weapons. The law of proportionality dictates that parties to a conflict must weigh the civilian cost of accomplishing a military objective; if the civilian harm outweighs the military benefit, the attack should be abandoned altogether. The principle of distinction mandates that the parties distinguish civilians from military assets, and that only military assets be targeted. Human Rights Watch and the UN Special Rapporteur on Extrajudicial Executions argue that autonomous systems would never be able to meet the criteria set by these laws.

Adherence to these laws requires subjective understanding, something machines can hardly accomplish. Even if it were accomplished in the near future, it would be unethical and against the principles of humanity to place the precarious responsibility for thousands of lives in the hands of pre-programmed machines. Furthermore, if an error leaves a robot unable to distinguish between civilians and the military, or even between friend and enemy, the result could be a massacre with the responsibility left hanging in between.

Certainly, these robots cannot be “court-martialed” for war crimes. Using such systems would also increase the likelihood of wars, as casualties would not be ‘living’ soldiers but ‘soldiers of prospect’, available cheaply and at greater efficiency. Morality and conscience are qualities we humans pride ourselves on as distinguishing us from other species; this is human turf, and handing the responsibility to machines is highly unethical.

But the advocates of using ‘killer robots’ in wars argue that these pre-programmed machines might adhere to international norms even better than humans, with minimal errors. Being devoid of human emotions, their judgments might be sounder, free of the human follies of greed, fear, and hatred that drive people to commit atrocities. Such machines might be better at distinguishing between civilians and the military, and with further developments in the field they might make better decisions than humans in a shorter span of time. Their targeted nature would result in efficient attacks with minimum casualties. Also, families would sooner support sending robots to fight on the front than sending their loved ones and risking their lives.

Thus, both sides of the debate present strong arguments. Entities like Human Rights Watch and the UN Special Rapporteur on Extrajudicial Executions, along with countries like Pakistan, have been demanding a preemptive ban on the use and development of autonomous weapon systems on humanitarian and other grounds. On the other hand, countries like the UK and the US favor regulation but reject an outright ban. The issue has also been among the UN’s top agenda items since 2013. The main forum for this debate has been the Convention on Certain Conventional Weapons (CCW) in Geneva, where three informal meetings of the CCW states have been held and deliberations are still ongoing.

India’s Policy on Killer Robots

India’s stance on the issue is significant in light of the fact that India was set to chair a UN group to discuss the future of autonomous weapon systems. Unlike other countries, India has not taken an extreme position. It has adopted a balanced approach, favoring strict regulation to curb any humanitarian crisis that might unfold rather than an outright ban. It has called for “increased systemic controls on international armed conflict in a manner that does not widen the technology gap amongst states or encourage the increased resort to military force in the expectation of lesser casualties or that use of force can be shielded from the dictates of public conscience.” India has thus taken a cautious approach, warning the world of the risks that the development of autonomous systems poses to international security and of the resulting issues that need immediate attention.

Traditionally, India’s focus has been primarily on disarmament; consequently, it has often been called a ‘soft state’, as it is never the aggressor. Over the years, however, India’s defense policy has shifted owing to changes in the attitudes of its two hostile neighbors, and there is now an increased focus on national security. Indian soldiers are forced to fight in extreme weather conditions at high altitudes to protect the country’s frontiers.

As a result, many of our military personnel have lost their lives. In this context, the use of autonomous weapon systems seems enticing: they can be deployed around the clock, every day of the year, in any weather, without tiring. With its powerful neighbor China developing such systems, it becomes imperative for India to prepare itself for any unforeseen event that might arise in the future. Such systems would also help protect India’s growing strategic interests.

However, this might dent India’s image as an advocate of humanitarian principles, since such systems are increasingly described as ‘unethical’. Deploying them would also require greater permanent infrastructure at the borders, which could be read as an aggressive move inviting grave repercussions. Moreover, India is still a predominantly arms-importing country, and wide technological gaps need to be bridged before it ventures down this path. Since India is not a signatory to any international treaty governing this area, such as the Ottawa Treaty or the First Additional Protocol to the Geneva Conventions, it might find the path easier to tread; even so, it should keep in mind the opinion of the international community and strive to conform to international standards.

Nonetheless, an outright ban on such systems would not seem to be in India’s interest. India should tread carefully on this path; even though there is no clear international convention on the issue, it would serve India well to develop an indigenous policy of its own.

Conclusion

Considering the benefits of ‘lethal bots’, it does not seem practical to ban them outright. Even though they pose serious threats to humanity, rejecting them completely would not be a feasible option. The development of autonomous weapon systems is at different stages in different countries, and with countries engaged in an arms race, the reality of the ‘killer robot’ is not far off.

Thus, there is a pressing need for stringent regulation: these systems have a number of advantages, but their downsides cannot be ignored, and controlling them through adequate regulation seems to be the key. For India, this is a great opportunity to steer its defense policy onto a new path. Prominent figures like Stephen Hawking and Elon Musk have warned that such weapons have the potential to become “the Kalashnikovs of tomorrow”. We therefore need to tread this path very cautiously, so that the technology does not rise up like Frankenstein’s monster.


Editor’s Note
This article discusses one of the UN’s top agenda items since 2013, i.e., the use of autonomous weapon systems, particularly lethal bots, in warfare. The author first explains what a killer robot is, then critically analyzes arguments from both sides of the debate on whether killer robots should be used in warfare. The author also throws some light on India’s stance on the question. Lastly, the author concludes that we need to tread this path cautiously: these systems have their own advantages as well as disadvantages, so they should not be banned outright, but neither should they be left without any regulation.