A report on the ethics of live facial recognition technology in policing has warned that more needs to be done to identify the scope for bias and errors, and that there is a need for a legislative framework for its use.
Compiled by the Facial Recognition Working Group of the Biometrics and Forensics Ethics Group, it comes as an interim response to recent field trials of live facial recognition (LFR) by South Wales Police and the Metropolitan Police Service, and to concerns about the technology's effectiveness.
It acknowledges worries over the scope for error, bias and inaccuracy in the algorithms, noting that operators could wrongly defer to all of a system's decisions or, when it generates too many false matches, begin to ignore or override all of them.
There are also concerns over an intrinsic racial or gender bias in the systems and questions around the boundaries between trials and operational use of LFR.
In response, it says there is a need to distinguish errors and biases inherent in the technology from those arising from a human operator's decisions on how to use it. It also notes the lack of oversight and governance of LFR's use and says that, pending the development of a legislative framework, it should follow ethical principles previously drafted by the Biometrics and Forensics Ethics Group.
These include that LFR has to be used in an even-handed way; that automated biometric recognition systems should be open to scrutiny and effective oversight; that use of the technology should meet a proportionality requirement; and that there should be public consultation around its use.
The Biometrics and Forensics Ethics Group was set up in 2017 to examine the issues around all forensic identification techniques.
Image by Sheila Scarborough, CC BY 2.0, via Flickr