
ICO unveils AI data protection risk toolkit

04/05/22

Mark Say Managing Editor

Image source: istock.com/Monstij

The Information Commissioner’s Office has launched an AI and data protection toolkit as part of the effort to spread best practice in the use of artificial intelligence.

It has made the toolkit available as an Excel file on its website, which can be downloaded and edited to help the user organisation through the various stages.

Speaking at the launch event, staged with IT industry association techUK, Ahmed Razek, the ICO’s principal technology adviser for AI, said the ICO regards the public sector as a key stakeholder in promoting the use of the resource.

Senior policy officer Alister Pearson said it has been developed to help organisations comply with data protection regulations and win public trust in the use of AI.

“We absolutely support the innovative use of AI and recognise there are enormous benefits when it is done well,” he said. “The aim of this toolkit is not to slow you down but to help you speed up, innovate and obtain these benefits in a way that achieves compliance without the risk.”

Risks and controls

The toolkit takes the user through a series of risk areas at each stage of a project – business requirements and design, data acquisition and preparation, training and testing, and deployment and monitoring. It provides a space to summarise the assessment of each risk, then guidance on the controls to be carried out, followed by a series of practical steps to reduce the risk.

This is followed by space to record the steps taken by the organisation, the owner of the control, current status and the completion date.

A number of risk areas have been identified, most of which are in line with principles in the UK General Data Protection Regulation, taking in fairness, transparency, security, personalisation, storage limitation, data minimisation, lawfulness, accountability, purpose limitation and meaningful human review.

The latter involves ensuring a decision made by AI is explainable and can be checked for any bias.

The practical steps to address each type of risk are divided into what must be done to meet legal requirements, what should be done as part of best practice, and what could be done for optional good practice.

Key stakeholder

Razek highlighted the importance of the toolkit to implementations of AI in public services.

“Public services are a key stakeholder we are trying to hit in promoting the use of the tool, because some uses of AI in public bodies are particularly high risk as they affect millions of people in some cases, and sometimes vulnerable people,” he said.

“These are areas in which we are trying to socialise the use of the toolkit.”

Stephen Bonner, the organisation’s executive director, regulatory futures, said some public sector bodies had been among those providing feedback on the development of v1.0 of the toolkit from the alpha and beta versions.

He added that there is no firm schedule for any further iterations of the toolkit, and that the ICO’s next step will be to work with other bodies on developing case studies of how it is used to promote best practice.
