RUSI highlights risks for data analytics in policing

17/09/19

Mark Say, Managing Editor

A thinktank report has highlighted significant risks around bias in the police use of data analytics.

The Royal United Services Institute (RUSI) has published Data Analytics and Algorithmic Bias in Policing with a warning that the absence of consistent guidelines for the use of automation and algorithms may be leading to discrimination by police forces.

It acknowledges benefits from the use of the technology in some areas, notably in identifying likely locations for future crime through predictive mapping. But it says the evidence is less clear when it comes to individual risk assessment tools, largely due to a lack of research on the algorithms in use.

A few police forces in England and Wales have developed machine learning algorithms to assess the risk of known offenders re-offending and to feed into the priorities of their operations. But this work has come in for criticism from some of the police officers interviewed for the report, who describe it as uncoordinated and delivered to different standards in different settings.

This creates the potential for different types of bias, often reflecting existing mindsets within police forces, such as young black men being more likely to be stopped and searched than young white men.

Possible deviation

There can also be biases in a police officer deciding whether to follow or deviate from the insights provided by the algorithms. One factor is the tendency to over-rely on the automated outputs and to discount other relevant information, no matter how strong it may be.

The interviews pointed to a desire for clearer national guidance and leadership in how to use data analytics, and a widespread recognition of the need for legality, consistency, scientific validity and oversight.

At the same time, there has to be a recognition that “policing is about dealing with complexity, ambiguity and inconsistency”.

The report also says that developing fairness in the algorithms is not just about the data; it needs careful consideration of the wider operational, organisational and legal context, as well as the overall decision-making process.

RUSI highlights three important implications. One is around the allocation of resources, with police forces needing to consider how algorithmic bias may affect their decisions to police certain areas more heavily.

Another focuses on legal claims, with the possibility of individuals with ‘negative’ scores filing discrimination claims against the police.

Third is the danger of over-reliance on analytical tools, undermining the discretion of police officers and causing them to disregard other relevant factors.

Need for code

As a response, the report calls for a new code of practice for algorithmic tools in policing, setting up a standard process for model design, development, trialling, and deployment, along with ongoing monitoring and evaluation.

“It should provide clear operationally relevant guidelines and complement existing authorised professional practice and other guidance in a tech-agnostic way,” it says, adding that it “should ensure sufficient attention is paid to meeting legal and ethical requirements throughout all stages of the product lifecycle”.

This should run from the inception of a project through to the deployment of a tool.

RUSI plans to publish a second paper early next year including recommendations for the code of practice being drawn up by the Centre for Data Ethics and Innovation.

Image: Matty Ring, CC BY 2.0 via Flickr
