A (Very) Brief Introduction to 'AI Law'

Introduction

Law is a deep, complex, and highly compartmentalised field in which specialism is the status quo. For this reason, it is difficult even for lawyers to get to grips with where the field of 'AI Law' stands at present - never mind those on the technical, sociological, policy, or governance sides.

That doesn't mean we can't give a brief overview of the field as it stands, though! This blog post is a very brief introduction to AI Law (which isn't technically a field yet, as we'll discover later!) for those interested, with a few suggested readings at the end. It has been a highly requested post by people at conferences, particularly tech-focused ones, and so I have obliged. Please note that none of this is legal advice in any way; it is only a roadmap for further exploration.

The Current AI Legal Landscape
Any discussion of AI law and regulation as a field must start with a disclaimer. People in the legal field tend to hyper-specialise in one area, more so than in many other disciplines. I have an adequate understanding of the basics of AI law across the Western world and the different approaches nations are taking, but my specialism is AI law in criminal justice within England & Wales, which means that much of my knowledge of the law relating to AI is grounded in that precise area. This post is, however, aimed at a global audience. I will do my best to be a generalist without losing nuance, but it is worth buying books or contacting specialists in your own region if you want more locally specific detail.

This highlights another issue in the AI regulatory landscape: nations not only have their own laws but entirely different legal systems, and each nation may define what we mean by 'artificial intelligence' differently. Even within a single jurisdiction, such as England & Wales, differences appear: Schedule 3 of The National Security and Investment Act 2021 (Notifiable Acquisition) (Specification of Qualifying Entities) Regulations 2021 defines 'artificial intelligence' slightly differently from the UK House of Lords report 'Technology Rules? The advent of new technologies in the justice system'. This may seem a tiny difference, but a couple of words can make a significant difference in law. Think of it like having a comma in the wrong place in a line of code: small difference, big effect. As a result, in this article I will use 'AI' very broadly for generalisation's sake.
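To stretch that analogy with a minimal, purely illustrative sketch (the variable name is invented for the example), here is how a single stray comma in Python silently changes what a value is, much as a single word can change the scope of a legal definition:

```python
# One stray comma changes the meaning of the 'definition' entirely.
threshold = 5   # an integer: the value 5
print(threshold * 2)  # 10

threshold = 5,  # a one-element tuple (5,) - note the trailing comma
print(threshold * 2)  # (5, 5) - the same operation now behaves very differently
```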

With that in mind, I will briefly run through the current legal and regulatory landscape both in the UK and to an extent internationally.

What do we mean by ‘AI Law’?

AI Law can mean one of two things: firstly, how various areas of the law, such as data protection, tort, or criminal law, affect (or are affected by) AI; secondly, how AI impacts basic tenets of law, such as debates around legal personhood or machine responsibility. It's easier to start with the former for this post, because it's more 'current' and helps one understand the more future-centric and theoretical latter category.

Very few countries have a body of law specific to AI. A good way to explain this is the famous legal example of 'The Law of the Horse'. Our legal system has no unified, universal 'horse law' dictating the various legal issues surrounding horses. Instead, we have a myriad of laws which fall under different categories but affect horses. For example, you can't punch a horse in the face (well, you can, but it's illegal) because of the Animal Welfare Act 2006 and many other parts of criminal law. If that horse is a police horse, it is also protected under the Animal Welfare (Service Animals) Act 2019 and the penalty could be greater. If you want to race horses for money, that could be governed by the Gambling Act 2005, but if you want to use a horse as a mode of transport, it can be governed by Section 82 of the Road Traffic Act 1988. There are rules governing the sale of horses under contract law, and rules governing what happens if your horse kicks someone under tort law (a tort is a wrong suffered by someone; it's often what people mean by 'suing' someone).

This 'Law of the Horse' example became famous during the 1990s, when it was used by US Judge Frank H. Easterbrook to argue that special areas of law weren't necessary for cyberspace; but as time wore on and the cyber environment became much more complex, specialist areas of law began to be introduced. We are in the early stages of a repeat of this with AI law. At the moment, AI Law specialists are merely experts in how the various parts of different legal disciplines (or often one specific discipline, which for me is criminal law) interact with AI. In the future, we are likely to see increasing amounts of AI-specific legislation and case law, which will allow much more universal legal coverage.

With that in mind, let us begin on the basics of the current regulatory landscape.

Human Rights Law

Possibly the best-known and most active area of AI law and regulation at present is human rights law: the major instruments which enshrine the rights of individuals in democratic nations. These include the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), the International Covenant on Economic, Social and Cultural Rights (ICESCR), and the European Convention on Human Rights (ECHR).

One of the major benefits human rights law has over other types of law is that it is concrete, relatively stable, and, perhaps most vitally, crosses borders, since all signatory nations are bound by the same agreement. It therefore handles AI issues rather well. The rights AI most commonly interacts with include (paraphrased, as the wording differs between agreements) the right to privacy, the right to a fair trial, the right to equality and non-discrimination, and the right to freedom of expression, among others.

This is broadly one of my areas of specialism, and it covers algorithms that affect people's rights: the right not to be unfairly sentenced to prison, not to be targeted for one's vulnerability or protected characteristics, and not to be manipulated into doing things against one's wishes. The advantage of international laws such as human rights agreements is that no government or company, no matter how powerful, is beyond their reach, and they are not open to change from regime to regime or with whichever party happens to be in power.

Data Protection Law

Naturally, AI and data are heavily intertwined, and so data protection laws have a significant effect on the AI regulatory landscape. Possibly the biggest recent change was the EU's General Data Protection Regulation (GDPR), which shifted much of the 'power' from companies to individuals, especially where personal data is concerned. GDPR's definition of personal data is quite broad, which has forced AI developers to think hard about what kind of data they use. This is especially true because any company that uses data from inside the EU, even one based on another continent, must comply with GDPR or risk fines. To this day, for reasons I don't fully understand, some major companies in the USA, especially news outlets (this one, for example), will not allow people within the EU to access their websites: the things they do with people's data would break the rules, whereas people within the US have fewer rights over their data, so that is where the focus and the profit lie for those companies.

GDPR has been so successful that it has functioned as something of a reference point for other nations to emulate. Japan and South Korea in particular have modelled their own successful data protection regimes on it.

Tort Law

Tort law is, in simple terms, the mechanism by which people who accidentally harm others or their belongings are made to put things right. This can be difficult when the tortfeasor (the one causing the damage) is an AI. Tort law didn't initially adapt to AI well, because many of its basic concepts, such as the damage being reasonably foreseeable to the average person, don't apply neatly to an AI. Another issue is that while tort law in the USA is harmonised and uniform, the EU contains dozens of different jurisdictions, and so, in the opposite of the data protection situation, it is the USA that reaps the benefits of agility while the EU struggles to catch up.

At present, existing tort law in other areas is generally relied upon in contextual ways. For example, tort law covering drones covers autonomous drones, tort law covering driving and road traffic law covers autonomous cars, and tort law covering discrimination covers algorithmic bias.

This is one of the areas most subject to future change, as increasingly intelligent AI systems will put even more strain on many of the fundamental concepts of this area of law.

Copyright Law, Patent Law, and Trade Secrets

Combining several areas of law into one section here for the sake of brevity (sorry, lawyers!), there are a number of AI regulation issues in copyright law, patent law, and trade secrets.

Firstly, there is significant (largely theoretical) disagreement about whether an AI which creates things like art or ideas actually owns its creations. Thus far, the law has decided that the human who owns or operates the AI is the 'creator' of the work.

Copyright laws differ around the world, and so nations have approached the issue to different extents, or not at all. The USA decided simply that computers can't own copyright because they don't have legal personhood (we'll cover this in another post, when we discuss future alignment!). In the USA, 17 U.S.C. § 102(a) of the Copyright Act allows that computers can help in the creation of copyrighted works, but Section 313.2 of the Compendium of U.S. Copyright Office Practices states that a work must be created by a human being to be copyrightable.

The United Kingdom took the same stance but under different reasoning. Section 9(3) of the Copyright, Designs and Patents Act 1988 states that a computer-generated work belongs to the person who made the arrangements necessary for its creation, which essentially means that whoever coded the AI, or the user who told the machine to create the work, owns that work.

As AI has become more advanced, it has begun to discover or 'invent' things (being colloquial here, calm down, CompSci crowd!), such as more efficient ways of doing things or improvements on existing designs. This is an issue in patent law because inventorship, and therefore ownership, can be challenging to assign.

In the USA, patents are issued readily, and it is the job of the patent office to prove that a patent shouldn't be granted, rather than the other way around. That said, current regulations mean an identifiable person must be named as the inventor.

In the EU, patents must pass a series of checks on whether they are realistic and feasible, and this is often a bar to AI patenting 'its own' ideas. Japan has taken a much more proactive approach: its guidelines ask applicants to state when AI has made a contribution, and to offer evidence as to exactly how. This seems to be an effort at future-proofing.

Across most of North America and Europe, the inability to copyright or patent an algorithm itself, so that it cannot be copied, means that AI development companies fall back on trade secrets and are extremely secretive about their AI. This has huge consequences, especially when it comes to using AI in policing or in court as evidence.

Criminal Law

Finally, criminal law is beginning to find itself wrangling with AI-related legal governance issues. In 2014, an AI in Switzerland used $100 worth of Bitcoin to go on an illegal shopping spree at the invitation of its creator. It purchased from the dark web a fake Hungarian passport, Ecstasy pills, fake jeans, a fake Sprite can used to smuggle items, a hat with a hidden camera in it and, in a rare show of good taste, a Lord of the Rings e-book collection.

Naturally, this caught the attention of the police, who confiscated the goods and the robot. After three months, the robot was returned to its creator, along with the purchases (minus the drugs and the fake passport). This may have been an experiment by the robot's creators, but it was an interesting look at how the law reacts to AI committing what we recognise as crimes. A major, fundamental part of criminal law is 'mens rea', which translates to 'guilty mind' - essentially, 'did this person intend to commit a crime?'. When it comes to AI, that can be a difficult question to ask, let alone answer.

In the past, the law has dealt with differing levels of agency well. If a horse escapes a field, runs up to the nearest person, and kicks them to death, then it is the owner of the horse or the owner of the enclosure (depending on the circumstances of the case) who is liable - not the horse. An animal cannot be convicted of a crime, because animals are generally considered to be under the control of humans (when domesticated). When children commit crimes, it's somewhere in between: justice systems generally don't 'punish' the child per se, depending on age, but will act to remove the child as a threat to society via specialised youth prisons or psychiatric hospitals, and will attempt to correct long-term behaviour to avoid recidivism. It remains to be seen how our justice system will react to future AI 'committing' crimes, if you'll pardon another colloquialism. For now it is the creator who is liable, up to the point of foreseeability, but this may change as AI becomes ever more advanced.

In criminal law, the actus reus (meaning 'guilty act', referring to the physical element of the crime) and the mens rea (the psychological element, or intent, introduced above) are the twin pillars of justice. When it comes to AI, these elements are enormously difficult to establish, and so lawyers across the globe tend to pursue negligence on the part of the creator rather than deal with the AI itself.

As discussed in the Human Rights section, AI is also a focus of intense investigation and debate in the justice system in relation to police intelligence, criminal evidence, and the wider courts system (this is, in fact, the topic of my law PhD) due to issues such as bias, fairness, and transparency. AI systems here are interacting with the law in ways that challenge the core tenets of what we consider justice. For example, the recent House of Lords report 'Technology Rules? The advent of new technologies in the justice system' echoed a number of other voices in this field when it expressed concern that the use of AI could be eroding people's right to a fair trial, which you'll recognise from earlier.

Conclusion

This has been a spectacularly brief overview of what people commonly think of when they think about 'AI Law', focused entirely on the law as it stands today. I intend to publish a post on issues like legal personhood for AI, Artificial General Intelligence, and more in the future.

If you're interested in further reading on this topic, you can't do better than Law and Artificial Intelligence by Woodrow Barfield and Ugo Pagallo, or Robot Rules by Jacob Turner. Both are fantastic introductions to the field and very readable for a non-lawyer. After that? It's big legal textbooks and papers for you!