Man vs Machine: The Algorithm Affecting Millions, Those Fighting Back, and why it matters for the long-term AI future

Preface note: Please consider signing up as a free member of this blog - it helps me gauge engagement and gives you access to all articles. It’s always free, and I do nothing with any of your data. Thank you! Additionally, this article assumes a basic knowledge of Longtermism, which you can gain here.

Here in the UK, new laws that govern society come from two major sources. The first is a bill passing through both Houses of Parliament and receiving Royal Assent to become law. The second is court cases setting precedent through judicial decision-making - the outcomes of court cases dictate the probable outcomes of future cases in courts of equal or lower standing (and higher courts will take those decisions into consideration too). Strictly speaking, judges don’t so much create law as determine how it is interpreted, but for all intents and purposes we can treat this as creating law. Unlike in some other common law nations, such as the USA, the judiciary in the UK cannot strike down laws - that is a job for Parliament.

So what does this have to do with the long-term AI future? It means that the rules governing the use of AI come from multiple sources. Organisations such as the Centre for the Governance of AI and the Legal Priorities Project do fantastic work writing, researching, and advising on AI-specific legislation, and publications such as this House of Lords report have delved deep into AI risk in the legal system, but there is currently little longtermist-inspired work directed towards avoiding AI harms via the common law judiciary - that is, influencing long-term AI governance by contributing to court cases with favourable outcomes for wider AI Safety.

What kind of court cases? Well, this is where I and other longtermists often differ. Many longtermists approach AI Safety from the viewpoint of preventing extinction via AGI/ASI or similar, whereas I focus on preventing suffering from widespread AI misuse by humans. I believe the two are intertwined: by preventing AI harm to humans now, we shape and inform both law/policy and society to be more careful about how AI is developed. Winner winner, chicken dinner. Others in longtermist AI philosophy, however, believe that AI Safety is a purely technical endeavour and that we must prepare for AGI/ASI as its own focus area - leaving current AI harms for others to handle. Taking a neutral view, neither viewpoint is inherently more important than the other, and both are valuable avenues of exploration. Additionally, it is important to be transparent: because I come from a disadvantaged background that is currently experiencing AI harms, I am biased towards viewing these kinds of issues as more important. I’m open about that bias, and honestly don’t plan to change it soon. Saving all of humanity is a big task. There’s nothing wrong with taking longtermist-inspired action to save your own community as a starting point!

In essence, my belief is that longtermism from a legal perspective should be applied to court cases - particularly human rights cases. This is not only the logical thing to do to prevent future AI harms to vulnerable groups; it is also our moral imperative as legal scholars and practitioners to act in the best interests of society and of those whom these rules govern, particularly those too vulnerable to stand up for themselves. Landmark human rights cases (particularly in technology) are effective ways to influence the long-term legal future for altruistic purposes. When it comes to AI, there is one case in particular that stands to bend AI Safety’s path in one direction or the other.

The Algorithm Affecting Millions

In 2019, Privacy International acquired and read through the 995-page surveillance manual belonging to the UK Department for Work and Pensions (DWP). The DWP is the government body responsible for the welfare of the nation’s most vulnerable people - the disabled, the elderly, the poor, and the unemployed. This manual went into great detail about how the DWP surveils those demographics to combat fraudulent claims. And by surveil, I mean surveil in every sense of the word. If you are suspected of fraud for any reason, even a silly reason (trust me, we’re about to delve into that!), your entire world can quickly become something out of The Truman Show.

The Regulation of Investigatory Powers Act 2000 (RIPA) and the Criminal Procedure and Investigations Act 1996 (CPIA) give the DWP the power to surveil not only anyone on benefits it identifies as potentially fraudulent, but their neighbours, loved ones, and home or workplace too. The DWP can obtain data from any business you are a customer of - from gyms to banks to transport companies - and can even monitor data from your emails or web browser. Many of these ‘investigations’ last for a year or more. If at any point investigators are suspicious of you, they will cut off your money until they figure out whether their suspicion is genuine. Guilty until proven guilty. You must prove your innocence, not they your guilt.

Bear in mind two facts here: the DWP does not permit your cash savings to exceed a certain (quite low) amount, and the majority of people relying on it are vulnerable in some way. This means that the moment an investigation begins, your life is already destroyed - innocent or otherwise. Some of the guide even covers how to use the media to destroy a person’s reputation once a case reaches court. If you think this sounds too insane to be true - take a read yourself.

Remember, too, that the DWP oversees 23 million people - over a third of the population of the UK. For North American readers, that is several million more people than the entire population of New York - the entire state, as well as the city. It’s roughly two thirds of the population of Canada and, for European readers, about the population of the Netherlands and Denmark put together. All under the DWP’s remit.

This 995-page document went into great detail about the whole range of methods used to strip a vulnerable person of their privacy. One thing was missing, though: how did they choose who to surveil? A potential answer could be found in the DWP’s Annual Report and Accounts 2017-2018, a public document on its finances. A single short paragraph on page 63 shed some light:

We have developed cutting-edge artificial intelligence to crack down on organised criminal gangs committing large-scale benefit fraud. We carried out trials using algorithms that can identify different types of organised attacks on the welfare system. The algorithms reveal fake identity cloning techniques that are commonly used by fraudsters which are only detectable by intelligent computer programmes capable of searching for anomalies in billions of items of data. Artificial intelligence is enhancing our efforts to tackle benefit fraud

Any normal person would be thrilled to hear AI was being used to stop organised criminals in their tracks. Anyone with any experience in AI, crime, or both, however, will hear alarm bells ring. AI is commonly used to detect fraud - in fact, it’s now a lynchpin of the industry. But an AI operating in secret, with no regulation, with the powers of RIPA and CPIA behind it, and dealing almost exclusively with vulnerable people? That’s what gets the spidey-senses tingling. The reason is that not only are these algorithmic decision-making systems prone to very harmful error on their own, but they are often repurposed into something else as a cost-saving measure. We saw this with COMPAS - a US risk-assessment algorithm designed for one purpose that was repurposed to inform sentencing decisions, with disastrous results.
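
To make the error-prone part concrete, here is a minimal, entirely hypothetical sketch of the kind of anomaly-based flagging the DWP’s report gestures at (“searching for anomalies in billions of items of data”). The claimants, amounts, and threshold below are invented for illustration - we do not know how the real system works, which is precisely the problem - but it shows how easily ‘unusual’ gets conflated with ‘fraudulent’.

```python
# Toy sketch only: NOT the DWP's system. A crude anomaly detector that flags
# any claim far from the average, and so flags legitimate-but-atypical people.
from statistics import mean, stdev

# Hypothetical monthly claim amounts. The last claimant has unusually high
# but entirely genuine costs (e.g. specialist disability equipment).
claims = {
    "claimant_a": 320.0,
    "claimant_b": 340.0,
    "claimant_c": 310.0,
    "claimant_d": 335.0,
    "claimant_e": 980.0,  # atypical, but legitimate
}

values = list(claims.values())
mu, sigma = mean(values), stdev(values)

# Flag anything more than 1.5 standard deviations from the mean as
# "suspicious" -- the kind of blunt threshold anomaly detection relies on.
flagged = {
    name: amount
    for name, amount in claims.items()
    if abs(amount - mu) > 1.5 * sigma
}

print(flagged)  # {'claimant_e': 980.0} -- a false positive, and an investigation begins
```

A real system would be far more sophisticated, but the failure mode - outliers treated as suspects - scales with it.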

Privacy International sent a Freedom of Information (FOI) request to find out more…and boy did they find out more.

In January 2020, they received an initial answer. After a few months of back and forth, the DWP confirmed that its data matching system is called the General Matching Service (GMS), but that the data analysis AI within its Risk and Intelligence Service doesn’t have a name. It confirmed that the AI was developed in-house - and that’s about it. When it came to any questions about the algorithm itself, the DWP refused to say what categories of data are used for matching and analysis, which criteria or indicators the system uses to flag someone as likely to commit fraud, or even what kind of code the AI is written in.

In its refusal, the DWP cited Section 31(1)(a) of the Freedom of Information Act (FOIA), which covers information that, if released, would prejudice the prevention and detection of crime. The DWP stated that it would ‘not be appropriate’ to make information available to the public which would ‘make the DWP more vulnerable to crime’.

The DWP did, however, state that it is “exploring the potential use of techniques such as machine learning and network analysis. However, these are still in development.”

The red flags are apparent already. In the financial report, the AI is hailed as a super-powerful anti-criminal device; when pressed on human rights matters, it’s merely data matching in very early development (protip: ‘data matching’ is still an algorithm, my guy). In tech companies the marketing-versus-reality dichotomy is well known, but for government agencies dealing with vulnerable people it’s unacceptable. And dangerous.
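
And for readers wondering what ‘data matching’ looks like in practice, here is a hypothetical sketch. The datasets, field names, and rule are all invented, not the GMS’s actual (undisclosed) logic, but the point stands: even a ‘mere’ matching rule is an algorithm making consequential decisions about people.

```python
# Hypothetical sketch only: a simple data-matching rule of the general kind
# described, not the DWP's actual GMS logic (which is undisclosed).

# Records the department already holds about a claimant.
benefit_records = {
    "NI123456A": {"declared_income": 0, "declared_savings": 4000},
}

# Third-party data obtained from banks, employers, and so on.
external_records = {
    "NI123456A": {"observed_income": 90, "observed_savings": 4100},
}

def flag_for_investigation(ni_number: str) -> bool:
    """Flag a claimant whenever held data and external data disagree at all."""
    declared = benefit_records[ni_number]
    observed = external_records[ni_number]
    # A one-off £90 gift or a small interest payment is enough to trip this.
    return (
        observed["observed_income"] > declared["declared_income"]
        or observed["observed_savings"] > declared["declared_savings"]
    )

print(flag_for_investigation("NI123456A"))  # True -- and payments stop while 'suspicion' is checked
```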

Those who pay the price

I spoke recently at RightsCon, the world’s leading summit on human rights in the digital age, to expose details about how the DWP treats the vulnerable, in my session ‘The Almighty Smiter - Secret Civil Algorithms and the Vulnerable who Pay the Price’. The full session is too long to include here, but I’ll boil one part down for two reasons: firstly, to show exactly why the DWP in particular having a secret algorithm is a bad idea; and secondly, to openly and transparently display the bias I may have on this topic.

Though (mostly) recovered today, in 2018 I was severely disabled after an incident and relied entirely on the DWP for my welfare. It was during this time I discovered how our government treats disabled people, and this experience demonstrates why algorithms could be a new weapon against the vulnerable.

It took me many months of battling the Universal Credit system (the general benefits system) to get my first payment, but more struggles began when I needed Personal Independence Payment (the disability-specific benefit). I needed to ‘prove’ my disability. No problem - only fair. My NHS nurse wrote an official medical report to the DWP and two of my carers sent in forms, but this wasn’t enough. The DWP insisted on sending its own ‘medical professional’ to assess me; I know now this is standard practice for all disability cases. This ‘qualified medical professional’ found me completely fit and well. Perfect health! I scored literally zero points on the disability assessment. My claims were denied. I appealed. The appeal was rejected. Supported by my nurse and my family, I took the DWP to a tribunal - and I won.

It was during this process that I found out my DWP assessor, the ‘qualified medical professional’, was a physiotherapist, not a doctor, and had no other healthcare qualifications. I also discovered that these assessors have rejection quotas - some formal, some institutional. There are financial incentives to complete as many assessments as possible, and a Dispatches undercover investigation found considerable pressure to decline people, with one worker saying on hidden camera that he sometimes completed assessment forms before even meeting claimants.

The DWP didn’t bother to turn up to court on the day in my case. My nurse told me the DWP generally doesn’t. Further research shows that the DWP often won’t even reply to the Tribunal or furnish any information at all to the court process. When I pushed to find out why, I was told that it’s a ‘numbers game’: for the DWP, the value of the long tribunal process is that most vulnerable people cannot get through it - most who are denied benefits never make it to court in the first place because they’re too sick, too vulnerable, or too impoverished to do so. So where do they go? Homelessness, destitution, sometimes death. To the DWP, a tribunal process designed to ensure fairness is just another tool to maximise the attrition rate of human beings for financial savings. That day I won because of my circle of support, but countless others lost by never getting there in the first place. For us it’s a colossal amount of human suffering. For the DWP, it’s a positive number on a spreadsheet.

In February 2022, 80,000 of these declined decisions were overturned after the DWP admitted several ‘flaws’ following an undercover Channel 4 documentary exposé. On an ongoing basis, a wrong decision is overturned every minute of every day. Further exposés have even shown disabled people being openly mocked by DWP assessors.

So how does this relate to AI? An organisation like this utilising algorithms - potentially ones not designed for this purpose - is bad enough. But using algorithms for this kind of work makes it much harder for exposés like those above to uncover corruption and harm, and provides a useful scapegoat for harmful activity at large scale. It also enables further discriminatory behaviour against vulnerable people. Evidence is coming to light that the DWP is selecting disabled people for investigations, sanctions, and probes far more often than their demographic makeup would suggest, and this coincides with the rise of its AI. Though research is still in its infancy (there is considerable difficulty in getting funding for such matters), anecdotal evidence is mounting that disabled people make up the bulk of those selected by the algorithm for seemingly random punishment. The DWP, however, has refused any transparency at all - despite being a public, government organisation using AI to make decisions about vulnerable people’s lives and futures.

The transparency issue extends beyond the algorithm itself. Foxglove, for example, have expressed concern that the DWP’s system “may involve claimant data being handed over to external organisations without telling them”.

Those fighting back

This is not the first time the DWP’s use of technology has raised issues. In 2020, the Court of Appeal ruled that failures in the DWP’s computer systems, which harmed benefit recipients, were unlawful. This time, its use of AI is making smaller waves. But not none.

The Greater Manchester Coalition of Disabled People (GMCDP) and the human rights non-profit Foxglove have sent initial legal letters to the DWP requesting transparency - the precursor to taking the department to court to force the issue.

A UN report on the UK’s digital welfare state expressed significant concern that algorithms are highly likely to repeat and exacerbate biases, and can “fatally undermine the right to social protection for key groups and individuals”. Though this hasn’t yet resulted in action, it does show that international oversight bodies are watching - but waiting to act.

Additionally, researchers (including myself) are now beginning to investigate the types of harm this AI may be perpetrating, and to work backwards from those harms to reverse engineer how the algorithm behaves. A group of AI researchers and legal scholars from a local Effective Altruism group have teamed up with several outside experts and a range of charities to rein in this algorithm and prevent further harm.
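
To give a flavour of what working backwards from the harms might involve, here is a minimal sketch of a disparate-impact style audit. All of the numbers and group labels are hypothetical - the real research is still gathering data - but it shows how an algorithm nobody is allowed to inspect can still be probed through its observable outcomes.

```python
# Hypothetical sketch only: auditing an opaque selection algorithm by
# comparing investigation rates across groups from its observable outcomes.

# Made-up counts of the kind that might be assembled from case files,
# FOI responses, or claimant surveys.
groups = {
    "disabled claimants":     {"investigated": 180, "total": 1000},
    "non-disabled claimants": {"investigated": 60,  "total": 1000},
}

rates = {
    name: counts["investigated"] / counts["total"]
    for name, counts in groups.items()
}

# Disparate-impact ratio: the selection rate of the less-selected group
# divided by that of the more-selected group. Values well below 0.8 are a
# common red flag in discrimination analysis.
ratio = min(rates.values()) / max(rates.values())

for name, rate in rates.items():
    print(f"{name}: {rate:.1%} investigated")
print(f"disparate-impact ratio: {ratio:.2f}")  # 0.33 in this made-up example
```

The algorithm itself may stay secret, but its outputs can still testify about it.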

Links to Longtermist Philosophy

So we have the general background on the DWP, its history with vulnerable people, its top secret “cutting-edge AI”, and the case that is about to begin - but what does any of this have to do with longtermism?

Though much of current longtermist thought focuses on existential risk, there is a good deal of published writing by respected authors on suffering risk. Natalie Cargill and Tyler M. John write in their essay:

“Conscious experience is uniquely precious. In our universe, it is far scarcer than gold, and it is valuable beyond any price. And remarkably, what we do today may determine the entire future of conscious life. [...] And should our descendents survive, will we bequeath upon them a bleak, grey crypt under the thumb of a bloodless tyrant? Or will we leave a jubilant world rich in art, in culture, and in splendour?”  

- Natalie Cargill and Tyler M. John, Page 12, 'The Long View: Essays on Policy, Philanthropy, And the Long-Term Future'

Cargill later goes on to discuss the importance of acting in a way that benefits people everywhere, and her writing ties in very well with a legal mindset:

“When we look at the world today, it is clear that we have a long way to go to secure the moral consideration that everyone deserves. It is incumbent on us to expand the moral circle of all human beings from its cruel and conceited beginnings to eventually include every sentient individual who lives today or will live in the future. [...] But history gives us cause for optimism about the ability of dedicated philanthropists and activists to expand humanity’s moral circle until all sentient individuals, present and future, are rightly considered. [...] The benefits of past efforts to expand the moral circle continue to cascade into the future, as we build on the work of our forebears to pursue a just world for marginalised communities. In this way, moral circle expansion is the ultimate movement-building project: today’s movements for prison reform, and for racial, gender, and species justice simply would not exist were it not for the efforts of early pioneers like John Howard”

- Natalie Cargill, Page 25, 'The Long View: Essays on Policy, Philanthropy, And the Long-Term Future'

Cargill goes on to describe - in what is a fantastic essay on expanding the moral circle, which should be read by anyone interested in the topic - how today’s background conditions “clearly make it possible for dedicated philanthropists to have an enormous positive impact by working to expand humanity’s moral circle”. She gives several examples, ranging from paragons of the civil rights movement, to anti-slavery groups, to individuals who self-funded campaigns on issues such as prison reform, as well as examples from UK law such as those who helped pass the Sexual Offences Act 1967, which legalised consensual homosexual acts. In summary, she writes:

“Given the astronomical scale of the future, and the opportunity to make the future as marvelous compared to the present as the present is compared to the past, philanthropists who want to make the greatest difference with their efforts should work to expand the moral circle to future generations”

- Natalie Cargill, Page 35, 'The Long View: Essays on Policy, Philanthropy, And the Long-Term Future'

Though there are many valuable publications on reducing suffering for future generations, and on the value of small groups of suitably experienced, trained, and funded philanthropists in tackling long-term suffering, I have quoted heavily from Cargill’s essays because they are among the most relevant signposts for where we currently stand on issues such as those discussed in this post.

For a more AI-centric perspective, in her essay ‘Ensuring the Safety of Artificial Intelligence’, Amanda Askell writes:

“These systems also come with risks, however. They could be used by bad actors to engage in large-scale misinformation or surveillance, to compromise critical systems, or to create realistic material for blackmail. They could pursue their goals in adverse ways that their users do not intend. They could cause alterations to the economy and to how information is produced that could lead to structural harms, for example from a loss of trust in reputable institutions or the entrenchment of existing social injustices”

- Amanda Askell, ‘Ensuring the Safety of Artificial Intelligence’

Though Askell is talking about what I sometimes call ‘proper AI’ here, the harms are very similar. If someone suddenly loses all of their income, their home, their possessions, and everything they’ve ever known - do they care how close to the Turing Test the decision-making software was? In any case, caring about the consequences for vulnerable people is somewhat rare in AI Safety. Writing from authors such as Cargill, John, and Askell shows that longtermism isn’t all about protecting the interests of the privileged, wealthy, and powerful, as some in the media and on Twitter have (sometimes validly, sometimes not) claimed.

It is my belief that longtermism as a philosophy, especially in the legal field, teaches us that it is a moral imperative to make tomorrow’s world a better place for everyone, but particularly for those who are unable or less able to positively influence that future themselves. It is increasingly apparent that although research labs do extremely valuable work, many great strides in AI Safety will be made on the streets and on courtroom floors too. By not taking part in these great battles between Human Rights and Algorithmic Opacity, we not only fail in our moral duty as legal scholars and researchers, but also allow opportunities for increased AI Safety to pass us by.

If the DWP were forced to make its AI (or algorithms, or whatever you want to define it as) transparent, it would set a legal precedent: other organisations would either be forced to reveal their own algorithms if challenged in court, or (more likely) other government organisations would take it as a sign that they need to improve transparency themselves to avoid a similar dragging through the courts.

I strive to be open about my biases. I believe in this case and its importance because I’m the kind of person it affects. I see its harms every day in my community, I meet people at the food bank who are its victims - it’s personal to me. I want to tackle this issue because I know it and it’s close to me, not just because it’s an effective longtermist cause.

But this brewing case is still an effective longtermist cause. Even if you’re operating by cold logic, you would be hard pressed to find a downside for AI Safety in increased transparency around government-developed algorithms. Increased transparency not only keeps vulnerable people safer and the general public better informed about what their government is doing, it also helps people keep tabs on the development of AI systems. If you’re someone who is solely focused on AGI/ASI, there is still benefit in being able to better monitor which tools are being developed in the AI space. After all, who has better access to training data and funding than the government? If one of AI’s biggest development bodies suddenly has to tell you what it’s building and how - can that be anything other than a win for the AI Safety lobby?

Tomorrow and all the days after

So what happens now? At the time of writing, Foxglove and the GMCDP are waiting for a final response from the DWP - the last step before court action starts. I have to tell you, David and Goliath doesn’t really cut it. Foxglove and the GMCDP raised £15,550 for the entire legal case. The DWP has a budget of £9.3 billion. That’s £9,300,000,000, by the way. Funding is going to be a major issue in the coming days, but there’s always the chance of a major donor coming forward, or of more fundraising.

In the meantime, a coalition has formed between several Effective Altruism researchers, local charities and food banks, and outside subject matter experts across computer science and law, to find out more about this algorithm and the harm it is doing to millions of people. Effective Altruism funding bodies, as well as other similar grantmakers, declined to provide any funding whatsoever, so other funding streams are being pursued (we’re literally talking donations of food and clothing here) - but even without funding, this research is likely to begin publishing findings by Autumn 2022. Findings which will hopefully shed more light on this algorithmic harm, and help protect vulnerable people.

If it all goes to court…who knows? Either the human rights lobby wins, AI is forced to become more transparent across the UK, and algorithmic harms to roughly 23 million people are prevented; or the DWP wins and its black curtain across AI continues to hold - along with every other powerful organisation’s. If the DWP is forced into transparency and the algorithm turns out to be discriminatory, then a whole new can of worms opens up. Just look at this almost identical situation in the Netherlands, where, once the dam broke, sweeping changes in how AI is deployed followed.

It is clear that this case is a major landmark (in the UK) for longtermist philosophy and its theory of preventing harm and carrying out altruistic work. A major AI system is harming double-digit millions of people. You couldn’t script a better opportunity to reduce current suffering and prevent future suffering in one. The struggle is in moving people from theory to action. Many enjoy writing about which work best helps humanity, but seem extremely reluctant to mobilise when the time comes to do that work.

If we as longtermists are to be anything more than observers of our own ideals, we must strive to live them. And there’s no time like the present. This case could be the biggest opportunity yet to leverage very little funding and effort into a major altruistic impact for the nation’s most vulnerable - and to prevent suffering for multiple future generations by setting welfare standards. The question is whether or not we will seize it.

Interested in doing more?

Donate to the GMCDP/Foxglove legal fund here

Contact the GMCDP here

Read a variety of high quality longtermist literature here

It is very helpful to sign up to this blog as a member, as it means future articles are delivered to your inbox, and it helps me gauge engagement better!

Video transcription coming soon