UK Government backs guidelines for secure AI


Mark Say, Managing Editor


The UK has placed itself among 18 countries to endorse the first set of global guidelines for the secure development of AI technology.

The guidelines have been developed by the National Cyber Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA) in cooperation with industry experts and 21 other international agencies and ministries from around the world.

NCSC said they will help developers of any systems that use AI to make informed cyber security decisions at every stage of the development process, whether the systems have been created from scratch or built on top of tools and services provided by others.

Lindy Cameron, chief executive officer of NCSC, said: “We know AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up.

“These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”

Four streams

The guidelines outline security measures for four areas: secure design, secure development, secure deployment, and secure operation and maintenance.

Among the elements for design are modelling threats to the system, designing it with security as well as functionality and performance in mind, and considering the security benefits and trade-offs when selecting an AI model.

Secure development extends to the supply chain, tracking and protecting assets, documenting data, models and prompts, and managing technical debt. Secure deployment involves steps such as protecting infrastructure, developing incident management procedures and making it easy for users to do the right thing. Secure operation involves monitoring the system's behaviour and inputs, and following a secure by design approach to updates.

Novel vulnerabilities

An NCSC blogpost said the guidelines are aimed primarily at providers of AI systems who are using models hosted by an organisation, or using external APIs, but all stakeholders should read them to help make informed decisions. It also emphasised that AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats.

Other countries to endorse the guidelines are the US, Australia, Canada, Chile, Czechia, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, Republic of Korea and Singapore.

The publication has come soon after the UK Government staged an international AI Safety Summit and launched the AI Safety Institute to evaluate the risks of frontier AI models.
