Durham police to use AI for custody decisions

10/05/17

HART system to assess risk of new offending based on data from Red Sigma

Durham Constabulary is planning to begin using an artificial intelligence (AI) system named the Harm Assessment Risk Tool (HART) for making decisions on whether suspects should be kept in custody.

A spokesperson for the force told UKAuthority that it is expected to go live with the system within the next three months.

HART has been developed by Durham Constabulary working with a team led by Dr Geoffrey Barnes from the Institute of Criminology at the University of Cambridge.

It will use data taken from the force's custody system to classify suspects as being at low, medium or high risk of offending if released.

According to a report from the BBC, HART was first tested in 2013 and its forecasts were then monitored against reoffending cases over the following two years. Its forecasts that suspects were low risk proved accurate 98% of the time, and its forecasts that they were high risk 88% of the time.
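For illustration only, accuracy figures of this kind amount to asking, for each risk band, how often the forecast matched the observed outcome. The sketch below computes that on made-up follow-up data; it is not the Cambridge team's evaluation code, and every value in it is an assumption.

```python
# Minimal sketch of per-class forecast accuracy, using made-up follow-up
# data pairing each risk forecast with the observed outcome.
forecasts = ["low", "low", "high", "high", "medium", "low", "high"]
reoffended = [False, False, True, False, True, False, True]

def forecast_accuracy(label, outcome_when_correct):
    """Share of forecasts of `label` that matched the observed outcome."""
    cases = [i for i, f in enumerate(forecasts) if f == label]
    if not cases:
        return None
    hits = sum(1 for i in cases if reoffended[i] == outcome_when_correct)
    return hits / len(cases)

# A "low" forecast is correct when the person did not go on to offend;
# a "high" forecast is correct when they did.
print("low-risk accuracy:", forecast_accuracy("low", False))    # 1.0 on this toy data
print("high-risk accuracy:", forecast_accuracy("high", True))   # ~0.67 on this toy data
```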

Sheena Urwin, head of criminal justice for the force, said those with no offending history would be less likely to be classed as high risk, although this could change if they were arrested for a serious crime.

Urwin stressed that there is no intention to replace decision making by custody sergeants. "This is a decision support tool, not a decision maker," she told the Trust, Risk, Information and the Law conference in Winchester this month. "It provides consistent and transparent decision support."

Results of the system trial will be published in a peer reviewed journal, she said. 

Speaking to UKAuthority days after the conference, Urwin said the model used in the AI draws on 34 variables, 29 of which relate to prior offending, along with postcode, gender and age. They do not include ethnic background, the use of which has stirred up concerns over apparent bias in the application of AI in the US, and only the first four characters of the postcode are used, to prevent identification of the street on which the subject lives.

The system goes through 509 decision trees and can produce 4.2 million decision points, drawing on five years of data.
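Published descriptions of HART characterise it as a forest of decision trees built over custody-record variables. The sketch below shows only what such a model looks like in general, using scikit-learn; the data, feature values and three-band labels are invented assumptions, not Durham's implementation.

```python
# Generic sketch of a random-forest risk classifier of the kind described:
# 509 decision trees over 34 variables, producing a low/medium/high band.
# All data here are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(500, 34))              # 34 variables, as reported
y = rng.choice(["low", "medium", "high"], size=500)  # three risk bands

model = RandomForestClassifier(n_estimators=509, random_state=0)
model.fit(X, y)

# In use, the forecast for a new custody record would be shown to an officer
# only as decision support, alongside the class probabilities.
new_record = rng.integers(0, 10, size=(1, 34))
print(model.predict(new_record))        # e.g. ["medium"]
print(model.predict_proba(new_record))  # probability of each band
```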

Urwin added that it will take two years to get a completely accurate picture of the system's effectiveness, although interim results might be produced after 12 months. She and the Cambridge University team plan to make the results openly available.

Controversy

The use of AI to classify offenders by risk is well established - and controversial - in the US. A study published last year of decisions made by one such system, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), found a worrying apparent bias. An examination of the cases of more than 7,000 people arrested in Broward County, which covers Fort Lauderdale, Florida, found that among assessments that turned out to be mistaken, black people were almost twice as likely as white people to be falsely labelled high risk, while white people were more likely to be mislabelled as low risk.
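The disparity reported there is a difference in error rates between groups: among people who did not go on to reoffend, one group was far more likely to have been labelled high risk. A minimal sketch of that kind of comparison, on entirely made-up records, might look like this.

```python
# Minimal sketch of comparing false positive rates across groups, using
# entirely made-up records of (group, labelled_high_risk, reoffended).
records = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", True, False), ("B", False, False), ("B", False, True),
]

def false_positive_rate(group):
    """Among people in `group` who did not reoffend, the share labelled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    if not non_reoffenders:
        return None
    flagged = sum(1 for r in non_reoffenders if r[1])
    return flagged / len(non_reoffenders)

for group in ("A", "B"):
    print(group, false_positive_rate(group))  # ~0.67 vs ~0.33 on this toy data
```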

In a 37-page rebuttal, the company behind the COMPAS system, Northpointe, denied any bias. It said the apparent racial imbalance was a natural consequence of applying unbiased scoring rules to groups that happen to have different distributions of scores.

But a British expert on crime data, Professor Allan Brimicombe of the University of East London, told the Winchester conference that of the data items used to score risk, 65-70% "would strongly correlate with race in America".

Additional material by Michael Cross.

This article was amended on 12 May following the provision of further information.

If you are interested in artificial intelligence and want to find out more, register here for UKAuthority's Rise of the Bots event on 20 June 2017, which is free to the public sector.

Image by Victor, CC BY 2.0 through flickr

 
