January 19, 2024

We shouldn’t trust AI in courts - Here’s why

Cressida Anness Lorenz in London, United Kingdom



Illustration: iStock

The rapid development of Artificial Intelligence (AI) has opened many avenues – image generation, a host of useful software tools, and the ability to produce pages’ worth of written work in just a few seconds.

But as AI becomes more widespread, a question has arisen – do we trust AI enough to use it in everyday professions, especially in legal work?

AI has become a more active topic of discussion since the release of software such as ChatGPT. But it was integrated into the legal system far earlier than one might expect.

The UK police force has been using a system called HART (Harm Assessment Risk Tool) since 2017, trained on data dating back to 2008. HART’s algorithm calculates a person’s risk of reoffending by analysing a database of people arrested previously.

Even though these systems have been in place for a considerable time and provide valuable services, that does not mean their use can always be justified.

As reported by Fair Trials, the data the software was trained on carried a racial bias and produced inaccurate predictions of a person’s risk of re-offending. An estimated 12,000 or more people have already been profiled by this flawed system.

This is not the only AI system being used by the authorities.

In 2016, the case of Eric Loomis in Wisconsin showed how AI can contribute to a judge’s final verdict.

Accused of five criminal counts related to a drive-by shooting, Loomis was sentenced to six years in prison. His sentencing was informed by COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a software product from Northpointe Inc. designed to make data-driven, reliable recommendations in court.

Similar to HART, COMPAS, which is mainly used in America, analyses data on a specific defendant – their age, gender or socio-economic background – and uses this to give each person a score. This score represents how likely the algorithm believes they are to re-offend.
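To give a sense of the kind of calculation such a tool performs, here is a minimal sketch of a risk-scoring function in Python. Every attribute, weight and threshold in it is invented for illustration – the real COMPAS model is proprietary and has never been made public – but the shape of the computation is the same: personal attributes go in, a single ‘risk’ number comes out.

```python
# Hypothetical illustration of a recidivism risk score.
# The attributes, weights and thresholds below are invented for
# demonstration only; the real COMPAS model is proprietary.
import math

# Invented weights for a handful of defendant attributes.
WEIGHTS = {
    "age": -0.04,           # older defendants score slightly lower
    "prior_arrests": 0.35,  # each prior arrest pushes the score up
    "employed": -0.50,      # being employed pushes the score down
}
BIAS = 0.2


def risk_score(age: int, prior_arrests: int, employed: bool) -> float:
    """Return a 0-1 'risk of re-offending' from a simple logistic model."""
    linear = (
        BIAS
        + WEIGHTS["age"] * age
        + WEIGHTS["prior_arrests"] * prior_arrests
        + WEIGHTS["employed"] * (1 if employed else 0)
    )
    return 1 / (1 + math.exp(-linear))


def risk_band(score: float) -> str:
    """Map the raw score onto the low/medium/high bands shown in court."""
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "medium"
    return "high"


# Example: a 23-year-old with two prior arrests and no job.
score = risk_score(age=23, prior_arrests=2, employed=False)
print(f"score = {score:.2f}, band = {risk_band(score)}")
```

The trouble is that if the weights themselves encode bias – for instance, by leaning on attributes that correlate with race or poverty – the resulting score simply dresses that bias up as something that looks objective.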

One major problem remains: it is not known exactly how these systems work, which puts defendants at a huge disadvantage. Loomis’ defence, for example, was unable to challenge the AI’s recommendation because the way the data is processed into a final score remains a mystery.

That same year, ProPublica conducted an investigation into a sample of 10,000 defendants in Florida whose risk had been assessed by COMPAS. It found that African American defendants were more likely to be falsely given a higher risk score for re-offending, while the opposite was true for white defendants.

The investigation also found that, of the roughly 7,000 people the algorithm identified as likely to commit a violent crime again, only 20% actually did so – a staggeringly low figure for a tool so widely accepted across the US.
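A rough reading of those figures, assuming exactly 7,000 people were flagged and exactly 20% went on to offend, makes the scale of the error plain:

```python
# Back-of-the-envelope reading of the ProPublica figures quoted above.
flagged = 7_000    # people labelled likely to violently re-offend
hit_rate = 0.20    # share of them who actually did

correct = round(flagged * hit_rate)
wrongly_flagged = flagged - correct

print(f"correctly flagged: {correct:,}")          # 1,400 people
print(f"wrongly flagged:   {wrongly_flagged:,}")  # 5,600 people
```

In other words, for every person the label got right, four others carried a ‘high risk of violence’ tag they did nothing to earn.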

Although racial bias did not apply in Loomis’ situation, the notable flaws found in the system indicate that COMPAS is unsuitable for making such decisions.

Generally speaking, these AI profiling tools remain legally unregulated. I believe it is hypocritical for courts to rely on technology that the law itself does not properly regulate. By doing so, we risk infringing on people’s right to a fair trial.

In Loomis’ case, the judge stated that the same decision would have been made even without COMPAS’ recommendation, but this will not be true for every defendant.
His inability to defend himself against an opaque algorithm sets a dangerous precedent, one that may result in miscarriages of justice.

All things considered, COMPAS and HART inherit the racial bias already prevalent in our justice system.

Despite the wishes of prominent figures such as Lord Reed of the UK Supreme Court, who supports its development to “enhance our justice system,” AI is simply too volatile to be used in the courts as yet.

The risk of bias, racial or otherwise, makes these systems too precarious to use at the moment. It is unwise to bring a technology that is not even regulated by law into our legal affairs.

Written by:


Cressida Anness Lorenz

International Affairs editor

London, United Kingdom

Hailing from Islington, London, Cressida was born in 2006 and has been interested in creative writing and journalism from a young age. She joined Harbingers’ Magazine as one of the winners of the Harbinger Prize 2023, and in 2024 became the International Affairs editor for the magazine.

An abstract thinker, her main areas of focus are varied and philosophical in nature. In her spare time she enjoys involving herself in the art world, attending numerous practical art groups. This involvement in art has led to a curiosity in perspective and how it can be used as a lens to see the world in many different ways.

She enjoys both reading and writing, which are her main pastimes, and aims to study law.

Edited by:


Sofiya Tkachenko

former Editor-in-chief

Kyiv, Ukraine | Vienna, Austria

