Lords committee calls for better oversight of AI in justice system

30/03/22

Mark Say, Managing Editor

Baroness Hamwee
Image source: parliament.uk, CC BY 3.0

A House of Lords committee has raised an alert about the use of artificial intelligence tools in the justice system, saying it has serious implications for human rights and civil liberties.

The Justice and Home Affairs Committee has sounded the warning in a new report, Technology rules? The advent of new technology in the justice system, in which it calls for the establishment of a mandatory register of algorithms used in relevant tools.

It says that without a register it is virtually impossible to find out where and how specific algorithms are used, or for Parliament, the media, academia, and people subject to their use to scrutinise and challenge them.

The committee also calls for a duty of candour to be placed on the police to ensure full transparency. It says AI can have huge impacts on people’s lives, particularly those in marginalised communities, and that without transparency there can be no scrutiny or accountability when things go wrong.

In addition, it recommends that a national body be established to set strict standards of scientific validity and quality and to certify new technology solutions against them. No tool would be introduced without first receiving certification, leaving police forces free to procure whichever ‘kitemarked’ solutions they choose.

This recommendation stems from the committee’s concern that most public bodies lack the expertise and resources to carry out their own evaluations, and that procurement guidelines do not address their needs.

Rapid developments

The report highlights that, while facial recognition is the best known, other technologies are in use and more are being introduced. Meanwhile, controls have not kept up.

It acknowledges benefits such as preventing crime, increasing efficiency and generating new insights for the criminal justice system, but says it is concerning that there is no mandatory training for users of AI technologies. There are also risks of exacerbating discrimination, with serious concerns that human bias in the original data becomes embedded in algorithmic outcomes.

Consequently, the committee is clear that final decisions should always be made by humans.

Further problems arise from the more than 30 public bodies, initiatives and programmes that play a role in the governance of new technologies. The committee says the system needs urgent streamlining and that reforms to governance should be supported by a strong legal framework.

Safeguards needed

Baroness Hamwee (pictured), chair of the Justice and Home Affairs Committee, said: “Without proper safeguards, advanced technologies may affect human rights, undermine the fairness of trials, worsen inequalities and weaken the rule of law. The tools available must be fit for purpose, and not be used unchecked.

“We had a strong impression that these new tools are being used without questioning whether they always produce a justified outcome. Is the computer always right? It was different technology, but look at what happened to hundreds of Post Office managers.

“Government must take control. Legislation to establish clear principles would provide a basis for more detailed regulation. A kitemark to certify quality and a register of algorithms used in relevant tools would give confidence to everyone – users and citizens.

“We welcome the advantages AI can bring to our justice system, but not if there is no adequate oversight. Humans must be the ultimate decision makers, knowing how to question the tools they are using and how to challenge their outcome.”
