
ICO outlines steps to avoid bias in AI

25/02/20

Mark Say Managing Editor


The Information Commissioner’s Office (ICO) has outlined measures including a data governance framework, monitoring for algorithmic fairness, and a readiness to add or remove data on different groups, all aimed at eliminating bias in the use of artificial intelligence.

The recommendations are included in new draft guidance on the AI auditing framework, on which the ICO has begun a consultation. The guidance contains advice on understanding data protection law in relation to AI, along with recommendations for organisational and technical measures to mitigate the risks.

It also provides a methodology to audit AI applications and ensure they process personal data fairly.

The move comes amid growing concern over biases in AI algorithms, shaped by the perceptions of those developing the programmes.

Preventative measures

The guidance investigates the factors behind possible bias in AI and sets out a number of steps aimed at mitigating the risk. Some of these are preventative, including the creation of a data governance framework setting out how personal data should be used, and ensuring that AI developers have completed training and competency assessments so they can identify and deal with any discrimination in the systems.

It also proposes documenting access management controls and a segregation of duties in the development and deployment of systems, a thorough assessment of the risk of discrimination, and a documented process of pre-implementation testing.

Other measures are focused on detection, including regular monitoring, documenting levels of approval and attestation of training and test data prior to use, and regularly reviewing performance against the most recent data.
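Monitoring of this kind is often carried out by tracking a fairness metric over a model's recent predictions. A minimal sketch in Python, assuming binary predictions and a demographic label per record (the function names, data and review threshold here are illustrative, not taken from the ICO guidance):

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-outcome rate for each demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        if pred == 1:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical monitoring check: flag the model for review when the
# gap between groups exceeds a tolerance set by the organisation.
preds = [1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "b", "b", "b"]
needs_review = demographic_parity_gap(preds, groups) > 0.2
```

Run regularly against the most recent data, a check like this gives the documented, repeatable evidence of performance that the guidance calls for.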

The document also points to corrective measures that include being prepared to add or remove data about under- or over-represented groups, and being ready to retrain model designers.
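Adding data for under-represented groups is commonly implemented by re-sampling the training set. A minimal sketch in Python, assuming each record carries a group label (the function and variable names are illustrative assumptions, not the ICO's method):

```python
import random
from collections import defaultdict

def oversample_minority(records, group_of, seed=0):
    """Duplicate randomly chosen records from under-represented groups
    until every group matches the size of the largest one."""
    rng = random.Random(seed)  # seeded for reproducibility
    by_group = defaultdict(list)
    for record in records:
        by_group[group_of(record)].append(record)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # Top up smaller groups with random duplicates.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Example: group "b" is under-represented, so it is topped up to
# match group "a" before the model is retrained.
records = [("a", 1)] * 4 + [("b", 0)] * 2
balanced = oversample_minority(records, group_of=lambda r: r[0])
```

Down-sampling over-represented groups works the same way in reverse; either approach should itself be documented and risk-assessed, since re-sampling changes what the model learns.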

Other elements of the guidance include ensuring lawfulness, fairness and transparency, assessing security and data minimisation, and enabling individual rights in AI systems.

The ICO said it is looking for feedback from technology specialists, data protection officers, general counsel and risk managers through an online survey.

“This is the first piece of guidance published by the ICO that has a broad focus on the management of several different risks arising from AI systems as well as governance and accountability measures,” the organisation said.

“It is essential for the guidance to be both conceptually sound and applicable to real life situations as it will shape how the ICO will regulate in this space. This is why feedback from those developing and implementing these systems is essential.”

The consultation is set to run until 1 April.

Image via www.vpnsrus.com CC BY 2.0
