
CDEI sets out six steps towards AI assurance

08/12/21

Mark Say Managing Editor


The Centre for Data Ethics and Innovation (CDEI) has set out six steps to develop an AI assurance ecosystem for the UK.


It has published a roadmap for the effort, following a commitment in the National AI Strategy to build an ecosystem of tools and services to identify and mitigate the risks in using the technology.

CDEI said it addresses one of the biggest issues in AI governance identified by international organisations including the Global Partnership on AI and the World Economic Forum.

The first step is to generate demand for reliable assurance across the supply chain by improving the understanding of risks and the accountabilities for mitigating them. The document says this demand is beginning to emerge in response to public concern over the potential risks.

Second is to build a competitive AI assurance market with a range of effective tools and services. This should give organisations choices in responding to specific risks in using different applications.

Third is to develop standards that provide a common language for AI assurance. CDEI says there are many ongoing standardisation efforts, and there is a need for some consolidation and a sharper focus.

It adds that the higher priority areas are AI management systems and risk management standards, to establish good practice that organisations can be certified against.

Fourth is to build an accountable AI assurance profession to ensure the services are trustworthy and high quality. This could come through accreditation and chartered professions, although the roadmap says it is still too early to tell which of these models would be right for the purpose.

Fifth is to support organisations in meeting their regulatory obligations by setting requirements that could be assured against. This could also help regulators achieve their objectives.

Sixth is to improve links between industry and independent researchers to develop assurance techniques and identify AI risks. This could involve some incentives, and CDEI says it could build on lessons learned in cyber security.

Partnership working

It has added that it is looking to work with partners and will take a number of steps over the next year to deliver on the roadmap. These include supporting the Department for Digital, Culture, Media and Sport and the Office for Artificial Intelligence in developing an AI Standards Hub, and working with professional bodies and regulators to set out assurable standards and requirements for AI systems.

Minister for Technology and the Digital Economy Chris Philp MP said: “AI has the potential to transform our society and economy; and help us tackle some of the greatest challenges of our time. However, this will only be possible if we are able to manage the risks posed by AI and build public trust in its use.

“In the National AI Strategy, we committed to establishing the most trusted and pro-innovation system for AI governance in the world and building an effective AI assurance ecosystem in the UK will be critical to realising this mission.

“I’m delighted to see the CDEI's roadmap published today, and look forward to working with stakeholders, from standards bodies to professional services firms, to make the vision the CDEI has set out a reality.”

