DSIT publishes AI assurance toolkit and techniques


Mark Say, Managing Editor

The Department for Science, Innovation and Technology (DSIT) has outlined a series of key measures to build public trust in deployments of artificial intelligence.

It has published an Introduction to AI Assurance that restates the Government's cross-sectoral regulatory principles and sets out a toolkit of assurance mechanisms, standards for auditing bias, and five key actions.

It says these are aimed at mitigating potential risks as AI is used in more advanced digital services.

The document reiterates the cross-sectoral principles expressed in the Government's white paper on AI regulation, published in March of last year. They cover safety and security, transparency and explainability, fairness, accountability and governance, and contestability and redress.

Building on these, the new toolkit involves three key elements: measuring how an AI system functions through qualitative and quantitative data; evaluating the implications of its use against agreed benchmarks drawn from standards and regulatory guidelines; and communicating the findings internally and externally.

Assessments and audits

These break down into mechanisms around risk assessment, algorithmic impact assessment, audits for bias and compliance, conformity assessment and formal verification.

It is all underpinned by ensuring data and systems are secure, often by using resources provided by the National Cyber Security Centre.

There is also a need to use global technical standards, notably those from the International Organization for Standardization (ISO), to support the mechanisms. These cover the foundations and terminology of using AI, interfaces and architectures, measurement and test methods, process, management and governance, and product performance.

Other important developments include the work of the AI Standards Hub and the development of an assurance ecosystem, in which there is a growing market of suppliers of assurance systems and services.

Benefits and risks

In the document’s foreword, Minister for Artificial Intelligence and Intellectual Property Viscount Camrose says: “The UK Government is taking action to ensure that we can reap the benefits of AI while mitigating potential risks and harms.

“This includes acting to establish the right guardrails for AI through our agile approach to regulation; leading the world on AI safety by establishing the first state-backed organisation focused on advanced AI safety for the public interest; and – since 2021 – encouraging the development of a flourishing AI assurance ecosystem.”

He adds: “A thriving AI assurance ecosystem will also become an economic activity in its own right – the UK’s cyber security industry, an example of a mature assurance ecosystem, is worth nearly £4 billion to the UK economy.”
