AI and Legal Personhood: Should the law recognise AI as persons?

Image Source: https://tinyurl.com/39rw74v2

Introduction

Often when we as legal scholars write about AI, the major risk is that we create, destroy, discuss or dissect AI law without adequately understanding AI itself and its technical details and complexities. It is with great pleasure, then, that we have all watched this situation play out in reverse lately with the LaMDA incident (can we say scandal yet?).

If you need background, I wrote an article here summing it up, but the short version fits in a sentence or so. Essentially, an engineer at Google became concerned that a large language model was ‘sentient’ and expressed those concerns first to Google, and then to the public. At that point he was placed on leave.

The idea of machine sentience has captured the imagination of the public for a long time, and so a qualified engineer making the claim has gone up like a spark in a powder keg.

While most sensible minds are calling for calm and for the facts to arrive, with a hefty dose of ‘sentience’ scepticism offered by leaders in both the AI and ethics fields, there is also considerable conversation about giving AI ‘rights’ and making them ‘legal people’.

Facing public misunderstanding of the law is commonplace for us in the profession, but it is unusual to be explaining the intricacies of law within an ongoing, impassioned AI debate. For that reason, I thought I’d offer an overview of the different types of ‘person’ the law can recognise, the pros and cons of actually giving AI legal personhood, and whether or not this recent sentience uproar has (or could have) any bearing on how the law treats AI. For this post, I will be using AI in a broad sense to mean intelligent software - not necessarily a physical being or robot. For reasons of interdisciplinary simplicity and general brevity, I will use ‘AI’ and ‘machine’ interchangeably, even though they are not necessarily the same category. Expect wide scope.

Does this Unit have a soul?

Apart from providing a fantastic opportunity to quote the Geth from Mass Effect, it’s also important to clarify what ‘personhood’ means in law. When the idea of legal personhood for AI comes up, the discussion is usually emotion-driven on either side of the spectrum, without actually considering what legal personhood entails. The idea of non-human persons is a lot more common than you might expect. We already have non-human persons with legal rights and obligations. The most common examples are limited companies, corporations, and organisations. The more unusual examples are certain rivers and even trees which are, in the eyes of the law, persons.

These are important examples, because they show that there’s a major difference between personhood and sentience. A corporation is not sentient. It cannot act on its own, decide its own matters, and it doesn’t even really physically exist. It’s an idea, shared by a multitude of people and made real by proxy - that proxy being the humans who act on its behalf. It is a legal entity completely separate from its shareholders. The property it owns is owned by the corporation itself, not the people who run it. It can enter into contracts as itself, but it’s the humans who actually negotiate and sign them.

Rivers are not sentient. Trees kind of are. Maybe. Nevertheless, the legal personhood of both requires humans to act on their behalf. Ants and cockroaches are sentient, and yet do not have legal personhood. You can stamp on all the ants you want, and neither they nor their elected representatives have any legal recourse. You still shouldn’t though, because it’s mean.

It’s vital to specify this difference because people often jump to the conclusion that if machines are sentient, that means they become persons. Or that if some machines become persons, that makes them sentient. In law, this is sometimes known as the Ontological-Consequentialist debate, and you can find a very high quality primer here. Philosophers may recognise both of these terms, and they mean much the same thing in this legal debate as they do in philosophy. The ontological position holds that personhood should derive from the intrinsic abilities and properties shown by the AI (ie, that a sentient AI should be bestowed rights for ethical reasons), while the consequentialist position holds that rights and responsibilities for AI should be bestowed in order to solve specific problems - ie, for the consequences that those rights or responsibilities would create. These aren’t strictly either/or positions, but there’s widespread agreement within the legal field that at present we’re firmly within the realm of consequentialism.

Like most things in law, legal personhood is a system designed to solve problems. For companies and organisations, the problem was the attribution of a consistent identity which doesn’t change when stakeholders do, and which allows the company to enter contracts, buy and sell assets, and so on under that consistent identity. It solves the problem of changing stakeholders and operational logistics. For some other examples, like rivers and trees, it often solves complex conservation problems and results in increased resources for the protection of valuable natural entities.

That’s the part of the debate people often forget, especially in public discourse. The question of whether or not to give machines an element of legal identity isn’t a moral, ethical, or philosophical question at all. Legal identity and personhood are legal tools and as such, the question of personhood is a legal question.

A whistlestop tour

It’s worth a quick guided tour around the concept of ‘legal personhood’ before we begin. Legal personhood comes in two main flavours (and perhaps one extra).

Natural Persons - Humans are natural persons, as they are capable of understanding and carrying out their rights and obligations. When a person is born, they naturally gain personhood by virtue of their humanity.

Juridical Persons - Juridical Persons are created artificial persons, such as corporations or NGOs, which are controlled by proxy humans.

A possible third type is the Anglo-Saxon Trust model which operates in some jurisdictions, and is a complex type of tax and asset planning/protection originating in the 13th Century. This doesn’t have too much relation to this post but is worth highlighting.

It gets even more complicated when we consider that just because someone is a person, it doesn’t mean they have rights, and just because something is a legal entity, it doesn’t mean it’s a person. Consider that there were periods in history when groups of people were considered ‘natural persons’ by virtue of being human, but didn’t have full legal rights - for example slaves, ethnic minorities, women, and the poor. These were people, but they did not have full legal identity. Conversely, the robot Sophia has been granted citizenship by Saudi Arabia: not a person, but something with a legal identity. When we consider duties instead of rights, we can find examples where someone has legal identity (and rights) but not duties. Children, the severely mentally ill or psychologically unwell, those with developmental disabilities, and more all have rights but are not considered to necessarily have duties, given their limited understanding of their actions. Legal personhood for AI is set to be complex.

In fact, it is already technically possible for an AI to have legal personhood - at least in some legal systems. US legal scholar Shawn Bayern showed this in his research, but it is best described in Barfield and Pagallo’s ‘The Law of Artificial Intelligence’, which describes the procedure as follows:

“ [...] it is possible to confer legal personhood on a computer system in the US, by putting it in control of a limited liability corporation (LLC). If that legal maneuver is upheld in courts, theoretically, AI systems would be able to own property, sue, hire lawyers, and enjoy freedom of speech and other protections under the law. The way this may occur is that AI rights start with one person setting up two limited liability companies and turning over control of each company to a separate autonomous or AI system. Then the person would add each company as a member of the other LLC. In the last step, the person would withdraw from both LLCs, leaving each LLC - a corporate entity with legal personhood - governed only by the other’s AI system. That process does not require the computer system to have any particular level of intelligence or capability. It could just be a sequence of “if” statements looking, for example, at the stock market and making decisions to buy and sell based on prices falling or rising. It could even be an algorithm that makes decisions randomly.”

It’s already possible to bestow upon AI limited imitations of legal personhood, but there are debates as to whether to create a new category.
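To underline how low the technical bar in that passage is, here is a minimal, purely illustrative sketch (in Python, with hypothetical names) of the kind of trivial decision-maker the quote describes - a handful of ‘if’ statements reacting to prices, or even a random choice. Nothing about it is intelligent, yet it is the sort of system that could, via the Bayern manoeuvre, end up steering an LLC:

```python
import random

def decide(previous_price: float, current_price: float) -> str:
    """A trivial rule-based 'decision-maker': a few if-statements reacting to
    a price feed. No learning, no understanding, certainly no sentience."""
    if current_price < previous_price:
        return "buy"    # price fell: buy
    if current_price > previous_price:
        return "sell"   # price rose: sell
    return "hold"

def decide_randomly() -> str:
    """The even simpler variant mentioned in the quote: decide at random."""
    return random.choice(["buy", "sell", "hold"])

# Example with made-up prices
print(decide(101.5, 99.0))   # -> "buy"
print(decide_randomly())     # -> "buy", "sell" or "hold"
```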

We return to the initial questions then - should we give machines a degree of personhood? Why would we? What would it accomplish? Is it the best choice?

Reasons for giving machines personhood


The idea of giving machines a degree of rights and obligations is not a new one. In the European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), the idea of giving robots their own legal identity was floated. It stated under Section 31(f):


31. Calls on the Commission, when carrying out an impact assessment of its future legislative instrument, to explore the implications of all possible legal solutions, such as:
[...]
“creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently;”

This section was never adopted into the final product, but it does a pretty good job of shortlisting a few of the good reasons for providing AI legal identity. Quite apart from ‘honouring sentience’ or any of the other most common reasons given in the media and the public sphere, the simplification of liability issues is a major draw. Yet not the only one. In helpful list format, here are a few of the major positives:

Liability


Civil liability decides who bears the economic consequences of an accident, incident, or disagreement. When an AI hurts someone, physically or financially, finding out who is responsible can be difficult. This is especially true when the AI is owned by an individual rather than a company. If you buy a self-driving car, and it runs someone over when you’re not even in the car (perhaps it was coming to pick you up from somewhere), are you responsible for that? You’d be right in thinking that insurance could cover this, but it would be your insurance - not the car’s. Giving AI an element of legal personhood in some cases would help with liability in two ways.

Firstly, it would help streamline the process of claiming damages. Liability is complex anyway, but we are increasingly likely to see people attempt to dodge or otherwise muddy liability by claiming their AI did something it wasn’t supposed to do and wasn’t designed to do, and that because the act was neither foreseeable nor authorised, they are not to blame. Someone hit by a driverless vehicle, whose money is blown by an automated finance AI, or who is discriminated against by a hiring algorithm mostly cares about the speed and ease of their restitution. The ability to sue the AI directly would make things easier for people - especially those without the resources to pursue cases for long periods.

Secondly, it would make it easier to build other, secondary schemes on top of it. One of these is a compulsory insurance scheme, proposed by Ms Delvaux, an MEP for Luxembourg:

“In the long run, determining responsibility in case of an accident will probably become increasingly complex as the most sophisticated autonomous and self-learning robots will be able to take decisions which cannot be traced back to a human agent. For these cases, the report asks the Commission to carry out an impact assessment for a compulsory insurance scheme, which includes the possible idea of giving the legal status of an electronic personality to robots in order to facilitate compensation for victims when human responsibility cannot be fully attributed. Liability is in fact a central part of this report, because it is indispensable for citizens’ trust.”

Liability is a huge subject, and the rules surrounding AI liability are equally complex. I may write another post on this specific topic in the future. Suffice it to say for now that liability isn’t an all-in or all-out binary choice either. You could make AI responsible for its behaviour to a degree without going all-in on full legal personhood. This leads neatly into another potential benefit.

Dealing with AI Criminals

Okay, the title here is a little clickbaity. I couldn’t resist. But it’s not wrong. Automated systems can and do break the law. AI systems have in the past purchased illegal narcotics and fake passports on the dark web. Some have also directly killed innocent people. The best we can do in these situations is try to prevent the crime getting any worse, and then pursue charges against whoever may be responsible. Just as with liability, however, finding out who is responsible can be extremely complex, and sometimes impossible.

It also raises issues. An AI can commit a crime by itself without its creator or owner knowing. Someone can also create an AI to commit crimes and try to dodge responsibility by blaming the software. How do we tell the difference?

Legal personhood here could help. It would allow the AI’s act to be treated as criminal in itself. The idea of making an AI a ‘criminal’ may seem an odd one to outsiders, but whether or not something is defined as criminal matters a lot in law. Whether assets can be seized, whether something can be destroyed, and what rights the harmed party has can often hinge on whether they are the victim of a crime or of a civil (tort) wrong. For example, if someone releases sexual media of you without your consent, that is a crime, and you get support as a victim of crime. However, if someone deepfakes your face onto pornography and releases that, it’s not a crime - and you get no victim support. A strange example, but it shows that in law, definitions matter.

Unfortunately, the wider public can often focus on the ‘punishment’ aspect of criminal AI a bit too much. There’s even been focus on how we would cause an AI to suffer in order to ‘pay’ for its crime. In my opinion this is missing the point. In criminal law (my own area), ‘punishment’ isn’t usually the focus so much as reformation is. Mostly, when the judiciary make decisions on punishment they do so to correct behaviour, both for that individual and as a message to others. For an AI, behaviour can be corrected in code, so punishment serves no real purpose other than revenge - which I believe has no place in law.

Robot Wages

This one may sound a bit ‘out-there’, but as automation increases and the population does the same, we can expect more reliance on social welfare, in whole or in part. Most nations have a ‘top-up’ system for those who work part-time. In the UK, this is Universal Credit. In simplified terms, if you have £1,100 of Universal Credit entitlement per month and you make £700 in wages, you’ll get £400 from Universal Credit to bring you up to that £1,100 threshold. With increased automation we may well see a reduction in what we expect the average labour of a person to be. Perhaps we start working 4 or even 3 days per week. Perhaps shorter workdays, given that we can offload much of the more time-intensive work to increasingly capable automation?
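For concreteness, here is a minimal sketch of that simplified top-up arithmetic (an illustration only - the real Universal Credit calculation involves work allowances and a taper rate rather than a straight pound-for-pound top-up):

```python
def simple_top_up(entitlement: float, wages: float) -> float:
    """Toy 'top-up' model: welfare pays the gap between wages and the entitlement
    threshold. Real Universal Credit tapers earnings rather than topping up 1:1."""
    return max(0.0, entitlement - wages)

# The example from the text: £1,100 entitlement, £700 in wages -> £400 top-up
print(simple_top_up(1100, 700))  # 400.0
```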

This poses an obvious problem. How does our economy keep turning? As employment decreases, income tax revenue falls and welfare costs rise. Legal identity could help solve this problem. AI could be ‘paid’ in cash or assets, which would help generate tax revenue to support human society, whilst also meaning that advanced AI systems have their own assets with which to pay for their own insurance and, in the event of non-insurable or criminal acts, assets with which to reimburse harmed parties. It could also help with inflation by reducing the cash in ‘normal’ circulation, though I have literally zero economic insight or knowledge - so that one is a true spitball!

Reasons Against AI Personhood

Like all things, there are just as many reasons not to provide some intelligent systems with legal personhood. Just like support for the idea, criticism of it has been levelled across academic and industry boundaries by experts in tech, law, ethics, and policy. The criticisms generally fall into a few categories:

A Potential Step Backwards

The law has a long and storied history of inappropriately assigning legal personhood to things. In fact, throughout the Middle Ages and the early colonial period many animals were tried in court, and some were even imprisoned or executed for their so-called ‘crimes’. This isn’t merely relegated to the history books, either. In less developed areas of the world it can still happen: not long ago, police in Nigeria arrested a goat for attempted armed robbery, believing that the goat had used witchcraft as part of its crime. There are many who feel that ascribing legal personhood to machines, especially too early, could cause far more problems than it would solve and would represent a step backwards for the legal field.

This is not just luddite thinking, either. It poses some potentially real problems. Historically, our legal and social concepts of ‘human’ and ‘person’ have usually been synonymous. By separating the two, our concept of a person, in the words of Solum, “may change in a way that creates cleavage between human and person”. This is important because it also offers the potential for backsliding. If not all people are human, then perhaps not all humans are people. One can see how this could be dangerous in a world with rising levels of hatred and neo-fascism.

Too Great a Deviation for the Law

There are many who feel that granting machines select personhood statuses would be too great a deviation from standard legal practice and established precedent. Machine personhood would not fit neatly into either natural or juridical personhood, nor would similar models be appropriate, so we’d likely need to create new categories.

There is also some speculation that extending human-like rights to machines would cheapen or even break the ECHR and UN’s charters on these matters - though I do not personally subscribe to this view.

Typically, the legal field does not like to move quickly in leaps and bounds. It is not the ocean, with great movements and high waves. It is a river which slowly changes its course by boring away at rock over a long period of time. Terrible analogy, but I like it and it does its job! Such a great leap here with the creation of whole new categories of personhood would be more of a ‘last resort’ than anything else, and so critics say that other, more grounded, methods should be tried first. This leads us neatly into the next criticism.

Makes the Situation Worse

It can also be argued that AI personhood could make the situation worse instead of better. In their Nature paper ‘Romans Would Have Denied Robots Legal Personhood’, Floridi and Taddeo state that “attributing electronic personhood to robots risks misplacing moral responsibility, causal accountability, and legal liability regarding their mistakes and misuses”. There is good grounding for this, and it is one of the many risks of providing AI with full or limited personhood.

Additionally, one of the odd little patterns we see with nascent technologies is that metaphors can have significant legal power. Sergio and Negri touch on this in their paper on robotic personhood, where they explore how equating drones with aircraft led them down a route of air regulation which stifled innovation in, and usage of, drones as a technology. Equating surgical robots with laparoscopic surgery (keyhole surgery using a camera), however, had the opposite effect - leading to a faster approval process. In short, how we in law, and the public, merely *perceive* AI personhood can have regulatory influence.

Sergio and Negri touch on this too, stating:

“The projection of human characteristics on robots does not depend on their form. Even when a robotic artefact has no anthropomorphic shape, people project onto these technologies human qualities such as consciousness and intelligence. As the autonomy of the system increases, making connections between the inputs (its commands) and the behaviour of the robot difficult, analogies with human beings are reinforced, which, in turn, can hinder any normative attempt, whether in terms of ethical debate, or in legal matters, such as the determination of who would be liable for possible damage caused by robotic artefacts”

Their paper is well worth reading in full as it touches on some new areas not often considered in law, but the part that jumps out most with regard to making the situation worse is this: the more anthropomorphic a robot is, the more people blame the robot rather than the person using it, and humans can suffer distinct emotional damage from the loss of robotic fellows.

With this in mind, by giving AI legal personhood we could accelerate future problems for ourselves.

Not Necessary

The most universal criticism is that this isn’t a necessary step, and that there are other (much safer and more realistic) methods of solving the problems brought forward by AI. The law already has procedures in place for governing complex, sentient beings under the control of others - consider the way corporations are vicariously liable for the actions of their employees, for example. Or the way that current insurance law could cover liability issues far better than overhauling the entire concept of legal personhood.

Conclusion and My Views

The most important point to take from this post is this - personhood is hugely complex, and so any debate on it is complex too, which doesn’t lend itself to Twitter spats or short media interviews. Couple this with the fact that nations use differing legal systems - the consequences of legal personhood in civil law systems will be drastically different from those in common law systems. As a result, when debates like this take place it’s often a case of people comparing apples to oranges, especially given that this is an interdisciplinary debate. One of the best quotes on this comes from a very good recent paper on AI personhood, which reads:

“[...] when discussing the challenges and opportunities brought about by RAI, both economic, legal, ethical, philosophical, and engineering considerations come into play, leading the debate to merge the methodological and analytical background of heterogeneous disciplines. Yet, economists, engineers, philosophers, and lawyers may use terms that have both a common, a-technical understanding and one which is peculiar of their own subject. Therefore, engineers or lawyers may speak of autonomy to denote different qualities than the ones that philosophers understand as associated with the said notion”

It’s already one of my favourite quotes, because it is at once true, centrally vital, and yet often forgotten. It’s one worth bearing in mind when any of us across the whole spectrum of AI personhood discuss the matter - either in academia/research or in public.

As for where I stand, when it comes to AI personhood I’m a firm consequentialist. I believe that we’re incredibly far from any ontological reason to provide AI with personhood, and in my opinion the only reason to give AI personhood is to solve specific legal problems. Are we there yet? I don’t think so. Our current legal remedies are for sure in need of some changes to deal with the novel problems increasingly complex AI throws up, but I don’t think AI personhood is the best step. At least not yet. When the time does come, I believe an ad-hoc system would be best, as a blanket grant of rights and duties to AI would create more problems than it would solve.

This is an area of debate we are going to see for some time, but it is vital that we in law connect broadly with those in computer science and sociology to ensure that when we make these decisions, they are made with a full and unbiased view of the reality of the situation - not media hype or crossed definitions.

There may be a time for AI personhood. But not today.