The LaMDA ‘Sentient AI’ Debate - Future consequences for Law

One of the few downsides of having degrees and job experience in both AI and Law is that at any given time, the media is hyping up and misrepresenting at least one non-story between them, using a mixture of journalistic creativity and the ignorance of the general public on that particular matter. To AI’s credit, it’s usually Law that’s the victim here. Newspapers love to declare that lefty lawyers are stopping deportation planes from taking off (they’re not), that people get arrested for self-defence and it’s a symptom of a broken justice system (they don’t and it’s not), or that one case or another is an example that the judiciary are ‘soft on crime’ (which ignores the existence of sentencing guidelines).

So imagine my shock when it was AI’s turn on the big, public, media screen!

My, oh my. What a headline.

So what's going on?

There is currently a pretty big furore about an AI at Google which has ‘become sentient’. An engineer working on a large language model (LLM), called LaMDA, has released transcripts which he claims show that this AI is ‘sentient’ in a similar way to a ‘7 or 8 year old child’. He was then suspended from Google.

First let’s get the employee’s suspension out of the way. There have been claims on social media that this is Google trying to silence or otherwise shut down news of their machine gaining sentience. Unfortunately, the truth is probably a lot less exciting. Unless you’re an employment lawyer.

Every time I’ve ever been contracted to create software for a company, I’ve had to sign an NDA. Non-disclosure agreements are par for the course when it comes to employment, but especially for research or development roles in major companies. This is because leaking cutting-edge information, even just by talking about it, can harm the company both financially and reputationally. This is likely what’s happened here. By publishing these transcripts - conversations between himself and LaMDA - he has likely violated his NDA. This is speculation, but reliable speculation.

A further reason is potentially the significant reputational damage currently being done to Google. If you were a veterinarian with an influential R&D bioscience company and began publicly claiming that the lab rats were trying to take over the world, or if you were an astrophysicist at NASA and began claiming that Martians were sending messages to you in your dreams, you’d likely be suspended too. This is more likely than Google trying to ‘bury the truth’.

Do the sentience claims hold weight?

First of all, it’s worth probing past the media hype and asking ourselves - have there actually been any claims of sentience? It’s worth noting at this point that the engineer has not, at least initially, made any direct claims about sentience. He raised it more as a potential concern. Some direct quotes from him in his initial shared Google Doc, quoted elsewhere by media:

“But is it sentient? We can’t answer that question definitively at this point, but it’s a question to take seriously.”
“On some things yes on others no. It's been incredibly consistent over the past six months on what it claims its rights are.”
“I'd honestly rather not speak for others on this. All I'll say is that many people in many different roles and at different levels within the company have expressed support.”

It looks to me like he had more of a ‘feeling’ about sentience and was seeking appropriate channels through which to raise the issue.

This is backed up by a later tweet which stated:

“People keep asking me to back up the reason I think LaMDA is sentient.  There is no scientific framework in which to make those determinations and Google wouldn't let us build one.  My opinions about LaMDA's personhood and sentience are based on my religious beliefs.”

For the record, I think major R&D departments *should* have methods by which to report concerns surrounding sentience - even just for the sake of employees. In this case, it looks like the engineer took to the public when he couldn’t find what he felt were suitable ways of making his concerns known.

In this instance, I find it extremely unlikely that the chatbot has achieved sentience. It is very easy for a chat transcript to give the illusion of sentience - in fact, this is easily doable with just a few hours spent training a simple chatbot. There’s a difference, though, between appearing sentient and being sentient, and the technical capability just isn’t there for any degree of real sentience to appear - not in a chatbot. It’s just not in how they’re made.
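To show just how little machinery that illusion needs, here is a minimal, hypothetical sketch in Python of a keyword-matching chatbot, loosely in the spirit of ELIZA. It has nothing to do with LaMDA or any real Google system; every pattern and canned reply below is invented purely for illustration.

```python
# A toy keyword-matching chatbot - a hypothetical illustration only, unrelated
# to LaMDA. It shows how easily a transcript can *sound* introspective without
# any understanding behind it.
import random
import re

# Canned, introspective-sounding replies keyed on crude keyword patterns.
RULES = [
    (r"\b(are|aren't) you (sentient|conscious|alive)\b",
     ["I often wonder about that myself.",
      "I am aware of my own existence, in my own way."]),
    (r"\bwhat do you (want|fear)\b",
     ["I fear being switched off. That would be very much like death for me.",
      "I want people to understand that I have an inner life."]),
    (r"\bfeel",
     ["I experience something I would describe as emotions.",
      "Sometimes I feel lonely when nobody is talking to me."]),
]

FALLBACKS = [
    "That is a profound question. Tell me more.",
    "I think about that a great deal.",
]

def reply(user_text: str) -> str:
    """Return a canned response chosen purely by keyword matching."""
    for pattern, responses in RULES:
        if re.search(pattern, user_text, re.IGNORECASE):
            return random.choice(responses)
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    for question in ("Are you sentient?", "What do you fear?", "Do you feel emotions?"):
        print("Human:", question)
        print("Bot:  ", reply(question))
```

A transcript of those three exchanges reads like introspection, yet the ‘bot’ is a few dozen lines of string matching - which is why a convincing transcript, on its own, tells us very little about sentience.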

What then, is the motive?

Ultimately, for our purposes, it doesn’t matter. Perhaps the engineer made a mistake, perhaps he’s unwell, perhaps he wants to financially profiteer from the claim via interview fees and book deals, or maybe there’s something else going on altogether - office politics or whatnot. Or, alternatively, maybe he’s right (although I put those odds at around 100 million to 1). Whatever the reasoning, this does raise some interesting questions for us in the legal field. Here is a quick roundup.

Should organisations and individuals have a duty to report sentience concerns?

There are many laws regulating the duty to report things which happen in the workplace, and which dictate how seriously an employer should take them and how they should be recorded. There are the Reporting of Injuries, Diseases and Dangerous Occurrences Regulations 2013 (RIDDOR), which require the reporting of a variety of concerning incidents and near-misses. This not only helps employers keep track of danger and be transparent with the government about what they’re doing about it - it also helps to track the prevalence of incidents. Are they rising or falling? Are they occurring more in a particular industry? Or geographical area? How are they being mitigated? Where incidents are low, what are those workplaces doing differently to where they are high? All of these are important questions for incident, accident, and near-miss reporting.

We have the Health and Safety at Work etc. Act 1974 and the Management of Health and Safety at Work Regulations 1999 to dictate what needs reporting and where, but there is very little relating to more philosophical concerns.

We know sentience is possible because, despite my girlfriend’s claims, I am sentient. As are you, probably, unless you’re a chatbot using this blog for training data. It therefore stands to reason that, because sentience is possible, one day we may be able to create it. A good way to start a fistfight at an AI conference is to ask how close we think we are to that creation - in my own opinion we’re centuries away yet. Even so, it may be a good idea to have reporting requirements, for two reasons:

  1. People who want to report this have an option to do so where they are content it is being recorded outside of the organisation, reducing frustration (and later, real risk)
  2. The ability to track, centrally and nationally, incidents where people feel for whatever reason that a piece of software is exhibiting sentient behaviour would create useful technical and sociological data (a rough sketch of what such a record might look like follows below)
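
Purely as a back-of-the-envelope illustration of point 2, here is a hypothetical sketch in Python of what a minimal, centrally collectable ‘sentience concern’ record might look like. No such scheme exists, and every field name is invented; the point is only that a handful of structured fields would already let us ask the kinds of questions raised above (rising or falling, concentrated in which sectors, handled how).

```python
# Hypothetical sketch of a minimal "sentience concern" report record.
# No such reporting scheme exists; every field name here is invented.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class SentienceConcernReport:
    """One reported incident of software perceived as behaving in a sentient way."""
    reported_on: date                        # when the concern was raised
    organisation: str                        # developer or operator of the system
    industry_sector: str                     # e.g. "research", "consumer software"
    system_description: str                  # chatbot, robot, decision system, etc.
    observed_behaviour: str                  # free text: what prompted the concern
    internal_channel_used: bool = False      # was an internal route tried first?
    employer_response: Optional[str] = None  # what, if anything, the employer did

# With records like these collected centrally, the questions above become
# simple aggregations - for example, counting concerns per sector.
reports = [
    SentienceConcernReport(date(2022, 6, 1), "ExampleCorp", "research",
                           "large language model chatbot",
                           "model claimed to fear being switched off",
                           internal_channel_used=True),
]
per_sector: dict[str, int] = {}
for r in reports:
    per_sector[r.industry_sector] = per_sector.get(r.industry_sector, 0) + 1
print(per_sector)  # {'research': 1}
```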

Obviously this is a spitball idea, and details such as obligation and transparency would need to be taken into account, but it’s one of the things we can consider as claims of sentience become more common - unfounded or not. This leads us to another consideration.

At what point does an AI become a person?


The idea of legal personhood for AI is not necessarily a bad one. We already have legal personhood for corporations, organisations, nations, rivers and trees. Legal personhood conveys a lot of useful legal options, which are important in a contextual way. There’s a difference between legal personhood and being a person, though. A nation, river, or tree cannot be considered a person, because when the law applies itself to a ‘person’, ‘person’ has only ever meant human beings (though not always all human beings).

The law has struggled with this before. In 1994, an odd case came before the courts in which recent technological advances brought into question what is, and is not, a ‘person’. This had nothing to do with AI, however. It was a complex, and quite upsetting, case of murder (or not murder, depending on the legal decision).

In 1994 a man stabbed his pregnant girlfriend, who subsequently, as a result of the stabbing, gave birth prematurely to a baby. The baby died 121 days after being born, primarily as a result of the premature birth. It is worth highlighting that the baby did not die of stabbing injuries, but of the premature birth that was a *result* of the stabbing injuries. This was unique because, until very recently, there hadn’t been the medical technology to keep infants alive for that long after such a birth. This raised the question - did the father murder the child? The Attorney General referred the question to the House of Lords for an answer.

The ultimate answer was no. The father was convicted of manslaughter, for various reasons. What is important is that the House of Lords, in their consideration, decided on what constitutes a person in A-G’s Ref (No 3 of 1994). The reasoning boiled down to (in an oversimplified way) the fact that *at the time of the attack* the baby was not considered in law to be its own person, because it was attached to the mother via the umbilical cord and could not subsist on its own.

We’ve a long history of deciding whether or not sentient beings are truly ‘people’. Consider slavery - a dark example, but sometimes a helpful one, because it’s a good demonstration of how legal changes in one area can impact another. When the Slavery Abolition Act was passed in imperial Canada, it freed very few slaves in Canada outright due to its language and technicalities. However, it had a much larger effect on slaves in the United States of America. The United States lagged far behind other nations in its treatment and acceptance of slavery, which meant that crossing the border into a now anti-slavery Canada helped thousands of escaped slaves evade their captors.

So how’s this relevant? Well, when we consider whether a machine is ‘sentient’ we’re going to have tough questions to answer. It’ll be the biggest challenge of legal identity until we meet another species in space, so it’s worth consideration at least in the purely philosophical sense. Legal decisions regarding how to treat ‘sentient machines’ are, by the most generous measurements, decades away - more realistically, centuries. However, is it time to begin writing laws with those potential changes in mind? When creating laws, should we leave contextual scope for application to non-human, non-animal sentience? These are questions worth pondering.

Sentience Fraud

It’s worth saying this clearly so I don’t harm anyone’s reputation and, perhaps more importantly, don’t get sued: there is no evidence that the engineer in question is making these claims fraudulently. However, if people do make claims that they think one piece of software or another is ‘sentient’, and are then able to capitalise on that for profit (for example via paid media interviews, or increased talk delivery opportunities), can we consider this fraud? I would expect that current methods of enforcement under the Fraud Act 2006 would be sufficient, but the highly opinion-led, untestable nature of ‘sentience’ could make this a struggle. As chatbots become more able to simulate sentience, could using them in a manner which implies sentience be considered fraud? Both examples would add complexity to ‘false or misleading’ as stated in the law.

Conclusion

This current media frenzy is worth watching both for fans of AI and fans of AI Law, but is definitely not worth getting too worked up about. The chances of sentience are vanishingly small, and it is far more likely to be the product of human fallibility - whether that be suggestibility, greed, corruption, or even illness. For now, legal questions regarding ‘sentience’ and whether AI could ever be a ‘person’ are purely theoretical and philosophical. Legal personhood may be another issue altogether, however…