Bots at Fault? – Revisiting Culpability and Legal Personhood in the Era of Artificial Intelligence

[Image: Artificial Intelligence. Credits – Lena Vargas]

Gone are the days when seeing a robot felt surreal: what was a dream a few years ago is now reality. You can find robots in airports, schools, and many other places. Beyond that, digital media and devices surround us in the contemporary world, and artificial intelligence travels with them, whether in the form of Siri or Google Assistant. As technology progresses, the world is transforming, and things we could not even dream of a few decades ago are taking over.

Every day, innovations and ideas become reality, with both pros and cons, and artificial intelligence is one such innovation. Humans can no longer claim to be the most intelligent and rational beings on earth, because competitors have entered the field: robots. This might sound like a false assertion to some, but the debate on granting bots (robots) the status of a person has been going on, and they might become persons in the future. Statistics show that the number of industrial robots installed in factories worldwide reached about 3 million units in 2020, more than doubling over the past ten years. A person who commits a crime is punishable by law; if robots commit crimes, will they be punished in a way similar to humans?

Understanding the Phrase – Artificial Intelligence

The first usage of the phrase ‘artificial intelligence’ dates back to 1956. In simple terms, AI is defined as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.”

According to another definition, “Artificial intelligence is a scientific discipline aimed at building machines that can perform many tasks that require human intelligence.” 

The US Defense Science Board, in its final report of the Summer Study on Autonomy (2016), described AI as “the capability of computer systems to perform tasks that normally require human intelligence (e.g., perception, conversation, decision-making).” Advances in the realm of AI have made it possible for machines to learn many tasks previously done only by humans. In its early years, AI research focused primarily on neural networks; machine learning came into the picture in the 1980s; and, with ever-increasing innovation, many new technologies have developed under the purview of artificial intelligence, such as deep learning, natural language processing, and computer vision.

Defining Bots

‘Bot’ is a short form of ‘robot’. According to the International Organization for Standardization (ISO), a robot is ‘an actuated mechanism programmable in two or more axes with a degree of autonomy, moving within its environment, to perform intended tasks’. The main types are industrial and service robots; other categories include pre-programmed, humanoid, autonomous, teleoperated, and augmenting robots.

Can we grant Legal Personhood to Bots and Artificial Intelligence?

[Image: Artificial Intelligence. Credits – Aleksei Vasileika]

A great deal of research has been conducted worldwide on granting robots the status of ‘legal person’ and fixing liability on them for any criminal offense they commit. The theory that personhood and culpability should be extended to artificial intelligence has both proponents and opponents. A myriad of research papers and articles have been published on the subject, each author building on distinct theories and notions to elaborate on this debatable issue.

Proponents of giving legal status to robots assert that doing so is not meant to put robots on an equal footing with humans but to fix responsibility when a robot commits a crime. According to Kelsen’s theory of legal personality, granting legal personhood to something is a ‘technical personification’ only, for the purpose of conferring rights and imposing liabilities. Legal personhood, in other words, is a legal device for organizing the rights and obligations of an entity. John Chipman Gray, in his book The Nature and Sources of the Law, elucidated that, “In books of the Law, as in other books, and common speech, ‘person’ is often used as meaning a human being, but the technical legal meaning of a ‘person’ is a subject of legal rights and duties.” In India, temples and idols in some places have been declared legal entities, as has the entire ecosystem in Ecuador.

Supporters of legal personhood for robots therefore raise another question: if companies, animals, rivers, and lakes can be declared persons or legal entities, why can robots not be? It is well known that a company operates as a separate legal entity from its owners, has rights vested in it and obligations to fulfill, and even signs or stamps documents in its own name. Campaigns for recognizing the rights and obligations of animals such as chimpanzees have also been underway. As for rivers and lakes, the most recent example is Sukhna Lake in Chandigarh, which the Punjab & Haryana High Court has declared a legal entity or legal person. These instances corroborate the stance that not only humans but other species and beings can be treated as legal persons.

In May 2014, a robot named Vital, developed by Aging Analytics, UK, was appointed to the board of the firm Deep Knowledge in Hong Kong. The decision reflected Vital’s ability to suggest good investment options, demonstrating strong cognitive abilities regarding therapies for various age-related syndromes. In 2017, Saudi Arabia granted citizenship to Sophia, a robot created by Hong Kong-based Hanson Robotics in collaboration with Alphabet, the parent company of Google. She was also named the first Innovation Champion of the UNDP (United Nations Development Programme), becoming the first non-human to receive such a title.

Sophia has given many interviews since her activation in 2015, and in 2018 a Wikipedia entry claimed that “interviewers around the world have been impressed by the sophistication of many of Sophia’s responses to their questions, (although) the bulk of Sophia’s meaningful statements are believed by experts to be somewhat scripted”. According to some scholars, granting robots the status of a legal person can act as a safeguard for humans against the ramifications of robot conduct. Others, drawing on the House of Lords’ International Tin Council case of October 1989, have observed “the risk (is) that electronic personality would shield human actors from accountability for violating rights of other legal persons, particularly human or corporate.”

In 2017, the European Parliament adopted a proposal suggesting that self-learning robots could be granted the status of “electronic personalities”, which would make it possible to hold them liable for any hurt or damage caused to people or property. In 2018, however, various artificial intelligence and robotics experts sent an open letter making their disapproval of the proposal apparent. In this letter they argued that legal personality for robots could be derived neither from the natural person model nor from the legal entity or Anglo-Saxon trust models.

Critics of granting legal personhood to robots argue mainly on two fronts. The first is that legal personhood carries rights and obligations that only humans are capable of understanding; robots cannot comprehend their importance, which weakens the case for giving them legal status. The second is that robots dehumanize humans. There also exists a fear that granting legal status to robots will lead to a society containing more robots than humans, or one in which humans must co-exist with robots.

Another argument is that technology in these new fields is still developing and experimental, so granting legal personhood to AI in order to make it liable may not be necessary at this stage. Lawrence B. Solum, concluding his paper ‘Legal Personhood for Artificial Intelligences’, stated, “If there is no common ground on which to build a theory of personhood that resolves a hard case, then judges must fall back on the principle of respect for the rights of those who mutually recognize one another as fellow citizens.”

As Solum predicted, no settled stance has yet been reached on whether AI entities or human-like entities could legally become counterparts of humans. Resolving this will require more research and debate among scholars, researchers, and the governments of different countries on whether they are ready to give legal status to artificial entities that might soon be taking over the world.

Can Artificial Intelligence Entities be held criminally responsible?

According to the Cambridge Dictionary, culpability means “the fact that someone deserves to be blamed or considered responsible for something bad”. This has been an intriguing and important question ever since the debate on the personhood and culpability of AI entities began. In 1981, a 37-year-old Japanese employee was killed by a robot working near him in a motorcycle factory. The robot struck the worker with its immensely powerful hydraulic arm, killing him instantly, after identifying him as a peril to the mission it was on.

In 2018, a self-driving Uber car running a test killed a woman who was crossing the road. In cases like these, where robots cause deaths or other harm due to technical irregularities or misinterpretations, the robots cannot be held liable as long as they have no legal status. The argument for making robots legal persons gains strength precisely when the question arises of assigning responsibility, or culpability, for crimes committed by robots.

Almost always, the programmer of a robot has to bear the brunt of the negligence or harm the robot causes to other persons, even when s/he had no intention of causing it. This is why proponents of legal personhood demand legal status for robots, so that charges for criminal offenses can be pressed against the robot and not the programmer.

Models of Culpability by Gabriel Hallevy

The scholar Gabriel Hallevy has addressed this issue in his detailed research paper ‘The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control’. He asserts that a crime has two constituents: actus reus (the physical act or conduct) and mens rea (the mental element, such as the intention to commit a crime). To impose criminal liability on a person, both requirements must be fulfilled, i.e., the person must have committed a criminal act and have had the intention of doing so. Hallevy gives three models of liability that could be used to impose liability on AI entities: (a) the perpetration-via-another liability model, (b) the natural-probable-consequence liability model, and (c) the direct liability model.

The first model theorizes the AI as an innocent agent: for any offense committed by such an agent, the perpetrator-via-another is held responsible. The perpetrator-via-another could be either the developer of, say, a robot or its user, since the programmer could have built into the robot a system through which it performs a criminal act, or the user could instruct it to do one. In such a case, whichever of these perpetrators committed the crime through the AI entity is held criminally liable, leaving the robot free of any liability.

The second model covers the scenario in which the AI entity commits an offense, but not the same offense that the programmer or user intended to commit through it; in simple terms, the AI entity deviated from the original goal. The real perpetrator, whether developer or user, is still held criminally liable, and the AI entity is also held liable if it did not act as a mere innocent agent.

The third and last model contends that AI entities can be held directly responsible for an offense, because they have the physical ability to act (or omit to act), fulfilling the requirement of actus reus, and cognitive abilities that fulfill the mental requirement (mens rea) of an offense. So where a robot was neither designed by its creator nor instructed by its user to do an illegal act, yet did so anyway, it could be held directly liable for that offense.
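
To make the logic of the three models concrete, the sketch below expresses them as a simple decision procedure in Python. It is purely illustrative: the names Incident and classify, and the three boolean facts, are hypothetical simplifications of Hallevy's models as summarized above, not a legal test.

from dataclasses import dataclass
from enum import Enum, auto

class LiabilityModel(Enum):
    PERPETRATION_VIA_ANOTHER = auto()      # model (a): human liable; AI is an innocent agent
    NATURAL_PROBABLE_CONSEQUENCE = auto()  # model (b): human liable; AI may share liability
    DIRECT = auto()                        # model (c): AI itself satisfies actus reus and mens rea

@dataclass
class Incident:
    """Hypothetical facts of an offense involving an AI entity."""
    human_designed_or_instructed_act: bool  # did a developer or user set up or order the act?
    ai_deviated_from_intended_act: bool     # did the AI commit a different offense than intended?
    ai_was_innocent_agent: bool             # did the AI merely execute, with no cognition of the act?

def classify(incident: Incident) -> tuple:
    """Return the applicable model and the parties who would bear criminal liability."""
    if incident.human_designed_or_instructed_act:
        if not incident.ai_deviated_from_intended_act:
            # Model (a): the human committed exactly the intended offense through the AI.
            return LiabilityModel.PERPETRATION_VIA_ANOTHER, ["developer/user"]
        # Model (b): the AI committed an offense other than the one intended.
        liable = ["developer/user"]
        if not incident.ai_was_innocent_agent:
            liable.append("AI entity")
        return LiabilityModel.NATURAL_PROBABLE_CONSEQUENCE, liable
    # Model (c): no human designed or instructed the illegal act.
    return LiabilityModel.DIRECT, ["AI entity"]

# Example: an AI neither designed nor instructed to offend acts illegally on its own.
model, liable = classify(Incident(False, False, False))
print(model.name, liable)  # -> DIRECT ['AI entity']

As the example shows, when no human designed or instructed the illegal act, the procedure falls through to the direct liability model, mirroring Hallevy's third scenario.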

These principles laid down by Hallevy are precise and informative and could be of great aid to the judiciary in deciding cases involving robots. If AI entities are granted legal status in the future, they will have rights and obligations like humans, and just as humans are charged with offenses, so will they be; at that point, these models of criminal liability will help decide AI-related criminal cases.

Conclusion

In essence, the debate outlined above is of utmost importance to the world today. If a new type of entity is at some point going to co-exist with humans, it is vital to regulate the conduct of such entities; we cannot let them walk free, bereft of any law governing their behavior. Personhood means conferring the status of a legal person on artificial entities or other species. It is not about placing robots in the category of humans but about conferring rights and obligations on them as legal persons. No country has as yet enacted a law that confers personhood on AI entities. The next part of the problem is the culpability of robots in criminal offenses, which is also a very difficult question, and if no law on culpability for crimes committed by robots is introduced, crimes by AI entities might increase. Some have even predicted that by 2040 robots might be committing a considerable percentage of total crimes around the world.

Such predictions are disturbing and horrifying for some, and we might wonder whether we should develop any more robots at all. There have been many cases in the past where AI entities have proved a threat to humans, such as Uber’s self-driving car killing a pedestrian. Various scholars have offered theories for determining the criminal liability of AI entities; Gabriel Hallevy, a distinguished scholar, provided three models for it and has also suggested how punishments such as capital punishment or incarceration could be formulated or modified to punish robots or AI entities. Given the plethora of stances on this issue, a conclusion is difficult to draw, and the absence of any present law on AI and robotics adds to the difficulty. A clear stance can be taken only once we have that cornerstone.