
NCSC publishes principles for security in machine learning

02/09/22

Mark Say, Managing Editor

Image: cyber lock on grid (source: istock.com/Depot)

The National Cyber Security Centre (NCSC) has developed a set of security principles for the application of machine learning (ML) systems.

It said the newly published principles, produced for the public sector and large organisations, address the fact that ML systems evolve as they learn to derive information from data, which makes them harder to test for vulnerabilities and means weaknesses are often missed.

They have been developed to provide context and structure for making educated decisions about threats rather than as a comprehensive assurance framework or checklist.

Each of the principles answers three questions: Why is it there? What are its goals? How could it be implemented?

Several themes run through them, including enabling developers, designing for security, minimising an adversary’s knowledge, securing the supply chain, securing infrastructure and tracking assets.

They outline each of these across several lifecycle stages: the prerequisites for development and wider considerations; requirements and development; deployment; operation; and end of life.

Raising awareness

In a blogpost introducing the principles, NCSC data science and research lead Kate S said: “Our goal is to bring awareness of adversarial ML attacks and defences to anyone involved in the development, deployment or decommissioning of a system containing ML.”

She identified weaknesses, including the difficulty of predicting whether the features, intricacies and biases in a system’s training dataset could affect its behaviour in ways that had not been considered. This creates the potential for certain inputs to produce unintended behaviour, which adversaries can then exploit.
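
To illustrate the kind of exploit the blogpost warns about, here is a minimal, hypothetical sketch (not taken from the NCSC guidance): a toy logistic regression classifier, with hand-set weights standing in for a trained model, is pushed across its decision boundary by a small, fast-gradient-style perturbation.

```python
# A minimal, hypothetical sketch of an adversarial input: a toy logistic
# regression classifier (hand-set weights standing in for a trained model)
# is flipped to the wrong class by a small, crafted perturbation.
import numpy as np

weights = np.array([1.5, -2.0, 0.5])  # hypothetical trained parameters
bias = 0.1

def predict(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

x = np.array([0.4, 0.3, 0.2])  # benign input: predict(x) ~ 0.55, so class 1

# Fast-gradient-style perturbation: nudge every feature by at most epsilon
# in the direction that pushes the score towards the opposite class.
epsilon = 0.2
x_adv = x - epsilon * np.sign(weights)

print(round(predict(x), 2), round(predict(x_adv), 2))  # 0.55 0.35: decision flipped
```

The change to each feature is small, but because it is aligned with the model’s gradient the prediction flips: exactly the kind of unintended behaviour an adversary can exploit.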

There can also be problems with the inability to verify models under all input conditions, with training data that can be reverse engineered, and with models that use continual learning and so need regular retraining to maintain their performance. The last of these requires a reassessment of security every time a new version of a model is produced, which for some applications could be several times a day.
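
The reverse engineering risk can be illustrated with a closely related and simpler-to-demonstrate attack, model extraction, in which an adversary recovers a model’s parameters purely by querying it. Everything in this sketch is hypothetical: a linear scoring model stands in for a deployed prediction API.

```python
# A minimal, hypothetical sketch of model extraction: an adversary who can
# only query a deployed model recovers its parameters from the responses.
import numpy as np

rng = np.random.default_rng(0)
secret_weights = rng.normal(size=4)  # parameters hidden inside the service

def query_model(x):
    """Stand-in for a prediction API that returns a raw score."""
    return x @ secret_weights

# The adversary sends probe inputs and records the responses...
probes = rng.normal(size=(100, 4))
responses = np.array([query_model(p) for p in probes])

# ...then solves for the hidden weights by least squares.
recovered, *_ = np.linalg.lstsq(probes, responses, rcond=None)
print(np.allclose(recovered, secret_weights))  # True: the model has been copied
```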

“Ultimately though,” she said, “whatever security concerns are introduced by using ML, there can be wider risks associated with not introducing AI and ML when it’s appropriate to do so.

“In the NCSC, we recognise the massive benefits that good data science and ML can bring to society, not least in cyber security itself. We want to make sure those benefits are realised, safely and securely.”
