DSIT unveils portfolio of AI assurance techniques

08/06/23

Mark Say, Managing Editor

[Image: cyborg woman looking at an AI icon over a computer screen. Source: istock.com/Andrew Suslov]

The Department for Science, Innovation and Technology (DSIT) has launched a portfolio of AI assurance techniques aimed at supporting the development of trust in the use of the technology.

It has been developed by the Centre for Data Ethics and Innovation (CDEI) and IT industry association techUK as a resource for anyone involved in designing, developing, deploying or procuring AI-enabled systems.

The portfolio document says assurance is about meeting criteria such as regulation, standards, ethical guidelines and organisational values, and identifies techniques in areas such as impact assessment and evaluation, bias and compliance audits, certification, conformity assessment, performance testing and formal verification.

It also cites a series of case studies as examples of using the techniques.

Research findings

Nuala Polo, senior policy adviser at CDEI, said the centre has conducted extensive research on attitudes towards, and take-up of, tools for trustworthy AI.

“One of the key barriers identified in this research was a significant lack of knowledge and skills regarding AI assurance,” she said in a blogpost. “Research participants reported that even if they want to assure their systems, they often don’t know what assurance techniques exist, or how these might be applied in practice across different contexts and use cases.

“To address this lack of knowledge and help industry to navigate the AI assurance landscape, we are pleased to announce the launch of the DSIT portfolio of AI assurance techniques.”
